Research Article

Quadratic Neural Networks for Solving Inverse Problems

Pages 112-135 | Received 26 Sep 2023, Accepted 30 Jan 2024, Published online: 22 Feb 2024
