Abstract
In this paper we investigate the solution of inverse problems with neural network ansatz functions with generalized decision functions. The relevant observation for this work is that such functions can approximate typical test cases, such as the Shepp-Logan phantom, better than standard neural networks. Moreover, we show that the convergence analysis of numerical methods for solving inverse problems with shallow generalized neural network functions leads to more intuitive convergence conditions than for deep affine linear neural networks.
1 Introduction
We consider an inverse problem, which consists in solving an operator equation (1.1) where denotes an operator, mapping between function spaces and . For the numerical solution of equation (1.1) an appropriate set of ansatz functions has to be selected, on which an approximate solution of equation (1.1) is calculated, for instance a function , which solves (1.2) where is an approximation of . The topic of approximating the solution of equation (1.1) by discretization and regularization has a long history: For linear inverse problems it has been investigated for instance in [1]. Much later, neural network ansatz functions were used for solving equation (1.1); see for instance [2]. In this paper we investigate solving equation (1.2) when is a set of neural network functions with generalized decision functions. To highlight the difference from classical neural network functions, we first recall their definition:
Definition 1.1
(Affine linear neural network functions). Let . Line vectors in and are denoted by
A shallow affine linear neural network (ALNN) is a function (here m = n) (1.3) with ; σ is an activation function, such as the sigmoid, ReLU or softmax function. Moreover, (1.4) denotes the corresponding parameterization of . In this context the set of ALNNs is given by
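To make the definition concrete, a shallow ALNN of the form (1.3) can be sketched as follows. This is a hedged illustration: the names `shallow_alnn`, `W`, `theta`, `alpha` are ours and not the paper's notation, and we use the sigmoid as the activation.

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def shallow_alnn(x, alpha, W, theta):
    """Shallow ALNN, cf. (1.3): sum_j alpha_j * sigma(w_j . x + theta_j).

    x: input of shape (n,); W: stacked row vectors w_j, shape (N, n);
    theta: biases, shape (N,); alpha: output coefficients, shape (N,).
    """
    return float(alpha @ sigmoid(W @ x + theta))
```

The parameterization (1.4) then simply collects `alpha`, the rows of `W`, and `theta` into one vector.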
More recently, deep affine linear neural network functions (DNNs) have become popular: An -layer network looks as follows: (1.5) where with and for all and all . Here N_l denotes the number of neurons in the l-th internal layer and denote the activation functions of each layer; they act component-wise on vectors. Moreover, we denote the parametrizing vector by (1.6) A shallow linear neural network can also be called a 3-layer network. The notation 3-layer network is consistent with the literature because the input and output layers are counted on top. Therefore, a general -layer network has only L internal layers.
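The layered composition in (1.5) amounts to repeated affine maps followed by component-wise activations. A minimal sketch (our own naming; whether the outermost layer carries an activation depends on the convention, and we take it affine here):

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def deep_alnn(x, weights, biases):
    """Deep ALNN forward pass, cf. (1.5): per internal layer apply
    y -> sigma(W_l y + b_l) component-wise; the final layer is affine."""
    y = np.asarray(x, dtype=float)
    for W, b in zip(weights[:-1], biases[:-1]):
        y = sigmoid(W @ y + b)
    return weights[-1] @ y + biases[-1]
```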
We demonstrate by example below (see Section 3.2) that the convergence analysis of an iterative Gauss-Newton method for solving a linear inverse problem restricted to a set of deep neural network ansatz functions is involved (see Example 3.4), while the analysis with shallow neural network ansatz functions leads to intuitive conditions (see also [3]). Moreover, it is well known in regularization theory that the quality of approximation of the solution of equation (1.2) depends on the approximation qualities of the set : Approximation of functions with shallow affine linear neural networks is a classical topic in machine learning and approximation theory, see for instance [4–11]. The central result concerns the universal approximation property. Later on, the universal approximation property has been established for different classes of neural networks: Examples are dropout neural networks (see [12, 13]), convolutional neural networks (CNNs) (see for example [14, 15]), recurrent neural networks (RNNs) (see [16, 17]), networks with random nodes (see [18]), with random weights and biases (see [19, 20]) and with fixed neural network topology (see [21]), to name but a few.
In this paper we follow up on this topic and analyze neural network functions with decision functions that are higher-order polynomials, as specified now:
Definition 1.2
(Shallow generalized neural network function). Let . Moreover, let be vector-valued functions. Neural network functions associated to are defined by (1.7)
Again we denote the parameterization vector, containing all coefficients, with and the set of all such functions with . We call (1.8) the set of decision functions associated to equation (1.7). The composition of σ with a decision function is called a neuron.
We discuss approximation properties of generalized neural networks (see Theorem 2.5), in particular neural network functions with decision functions which are at most quadratic polynomials in . This idea is not completely new: In [22] neural networks with parabolic decision functions have been applied in a number of applications. In [23], the authors proposed neural networks with radial decision functions, and proved a universal approximation result for ReLU-activated deep quadratic networks.
Outline of this paper
The motivation for this paper is to solve equation (1.2) on a set of neural network functions with iterative algorithms, such as for instance a Gauss-Newton method. Good approximation properties for functions of interest (such as the Shepp-Logan phantom) are provided by quadratic decision functions, which have been suggested already in [9]. We concentrate on shallow neural networks because the resulting analysis of iterative methods for solving inverse problems is intuitive (see Section 3.2). We apply different approximation theorems (such as for instance the universal approximation theorem of [6]) to cover all situations from Definition 2.1, below. Moreover, by constructing wavelet frames based on quadratic decision functions, we give an explicit convergence rate for shallow radial network approximations; see Section 4, with main result Theorem 4.6. In Appendix A the important conditions of the approximation to the identity (AtI) from [24] used for proving the convergence rates in Section 4 are verified.
2 Examples of networks with generalized decision functions
The versatility of the network defined in equation (1.7) is due to the flexibility in choosing the functions .
We give a few examples.
Definition 2.1
(Neural networks with generalized decision functions). We split them into three categories:
General quadratic neural networks (GQNN): Let
That is, is a graph of a quadratic function. Then equation (1.7) reads as follows: (2.1) denotes the parameterization of : (2.2)
Note that in equation (2.1) all entries of the matrices A_j are parameters.
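A plausible reading of the GQNN in (2.1) is that each decision function is a full quadratic x^T A_j x + w_j · x + θ_j; the following sketch encodes that reading (all names here are our own, not the paper's):

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def gqnn(x, alpha, A, W, theta):
    """GQNN, hedged reading of (2.1):
    sum_j alpha_j * sigma(x^T A_j x + w_j . x + theta_j),
    where every entry of each matrix A_j is a free parameter.

    A: shape (N, n, n); W: shape (N, n); alpha, theta: shape (N,).
    """
    quad = np.einsum('i,jik,k->j', x, A, x)  # x^T A_j x for each neuron j
    return float(alpha @ sigmoid(quad + W @ x + theta))
```

Setting all `A[j]` to zero recovers the shallow ALNN of (1.3), which is the CQNN special case discussed next.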
Constrained quadratic neural networks (CQNN): These are networks where the entries of the matrices A_j are constrained:
Let . That is, A_j = 0 for all . This set of CQNNs corresponds to the ALNNs defined in equation (1.3).
Let be chosen and fixed. We denote by MCNN the family of functions (2.3) which is parameterized by the vector (2.4)
In particular, choosing we get: (2.5)
Radial quadratic neural networks (RQNNs): For the argument in equation (2.5) can be rewritten as (2.6) with (2.7)
We call the set of functions from equation (2.5) which satisfy (2.8)
RQNNs, or radial neural network functions, because the level sets of ν_j are circles. These are radial basis functions (see for instance [25] for a general overview).
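A hedged sketch of an RQNN under the constraint (2.8): each decision function depends only on the distance to a center, so each neuron "fires" on a ball. The explicit form σ(ξ_j(r_j² − |x − y_j|²)) and all names below are our reading, not the paper's exact formula:

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def rqnn(x, alpha, xi, centers, radii):
    """RQNN, hedged reading of (2.5)-(2.8): each neuron is
    sigma(xi_j * (r_j^2 - |x - y_j|^2)), so its level sets are
    circles (spheres) around the center y_j.

    centers: shape (N, n); xi, radii, alpha: shape (N,).
    """
    d2 = np.sum((centers - x) ** 2, axis=1)  # squared distances |x - y_j|^2
    return float(alpha @ sigmoid(xi * (radii ** 2 - d2)))
```

For large `xi` each neuron approaches the indicator of a disk, which is the localization property exploited in Section 4.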
Sign-based quadratic (SBQNN) and cubic neural networks (CUNN): Let m = n.
(SBQNN): Let (2.9) or alternatively In the first case, and in the second case similarly, we obtain the family of functions (2.10) We call these functions signed squared neural networks. Note that here with .
(CUNN): Let (2.11) We obtain the family of functions (2.12) We call these functions cubic neural networks. Again with .
Remark 1.
It is obvious that generalized quadratic neural network functions and matrix constrained neural networks are more versatile than affine linear networks, and thus they satisfy the universal approximation property (see [6]).
The constraint in equation (2.8) does not allow a straightforward application of the universal approximation theorem. In fact, we prove an approximation result indirectly via a convergence rates result (see Section 4).
Sign-based quadratic and cubic neural networks satisfy the universal approximation property, which is proven below in Corollary 2.6 by reducing it to the classical result from [6].
In order to prove the universal approximation property of SBQNNs and CUNNs we first review the universal approximation theorem as formulated in [6]. It is based on discriminatory properties of the functions σ.
Definition 2.2
(Discriminatory function). A function is called discriminatory (see [6]) if for every measure μ on , the condition implies that .
Example 2.3.
Note that every non-polynomial function is discriminatory (this follows from the results in [8]). Therefore the choices of activation function in Definition 1.1 are discriminatory for the Lebesgue measure.
With these basic concepts we are able to recall Cybenko’s universal approximation result.
Theorem 2.4.
[6] Let be a continuous discriminatory function. Then, for every function and every , there exists a function (2.13) satisfying
In the following we formulate and prove a modification of Cybenko's universal approximation result [6] for shallow generalized neural networks as introduced in Definition 1.2.
Theorem 2.5
(Generalized universal approximation theorem). Let be a continuous discriminatory function and assume that are injective (this in particular means that ) and continuous, respectively. We denote .
Then for every and every there exists some function satisfying
Proof.
We begin the proof by noting that since is injective, the inverse function on the range of is well-defined, and we write . The proof that is continuous relies on the fact that the domain of is compact, see for instance [26, Chapter XI, Theorem 2.1]. Then applying the Tietze-Urysohn-Brouwer extension theorem (see [27]) to the continuous function , we see that this function can be extended continuously to . This extension will be denoted by .
We apply Theorem 2.4 to conclude that there exist and such that which satisfies (2.14)
Then, because maps into we conclude, in particular, that and
Therefore satisfy the claimed assertions. □
It is obvious that the full variability in is the key to connecting our proof with the universal approximation theorem. That is, if w_j, θ_j are allowed to vary over , respectively. RQNNs are constrained to in equation (2.7) and thus Theorem 2.5 does not apply. Interestingly, Theorem 4.6 applies and allows us to approximate functions in (even with rates).
Corollary 2.6
(Universal approximation properties of SBQNNs and CUNNs). Let the discriminatory function be Lipschitz continuous. All families of neural network functions from Definition 2.1 satisfy the universal approximation property on .
Proof.
The proof follows from Theorem 2.5 and noting that all our functions defined in Definition 2.1 are injective. □
3 Motivation
3.1 Motivation 1: the Shepp-Logan phantom
Almost all tomographic algorithms are tested with the Shepp-Logan phantom (see [28]). It can be exactly represented with 10 characteristic functions of ellipses, and therefore it is much better approximated by a GQNN than with an ALNN or a DNN, respectively. This observation extends immediately to all functions that contain "localized" features, because of the compactness of the characteristic sets. In this sense, sparse approximations with ALNNs and DNNs would be more costly in practice.
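To illustrate the point: the indicator of a single ellipse is well approximated by one quadratic neuron with a steep sigmoid, so a handful of such neurons reproduces a piecewise constant phantom. The two-ellipse toy below is our own construction with made-up parameters, not the actual Shepp-Logan ellipses:

```python
import numpy as np

def sigmoid(t):
    t = np.clip(t, -60.0, 60.0)  # avoid overflow in exp for extreme arguments
    return 1.0 / (1.0 + np.exp(-t))

def ellipse_neuron(x, center, axes, intensity, steep=40.0):
    """One quadratic neuron: intensity * sigma(steep * q(x)), where
    q(x) = 1 - sum(((x - c) / a)^2) is a quadratic decision function,
    positive inside the ellipse and negative outside."""
    q = 1.0 - np.sum(((x - center) / axes) ** 2)
    return intensity * sigmoid(steep * q)

def toy_phantom(x):
    # Intensities add where the ellipses overlap, as in the Shepp-Logan phantom.
    return (ellipse_neuron(x, np.array([0.0, 0.0]), np.array([0.7, 0.9]), 1.0)
            + ellipse_neuron(x, np.array([0.0, 0.1]), np.array([0.3, 0.4]), -0.8))
```

An ALNN neuron, by contrast, has half-space level sets, so many neurons are needed to cut out one compact ellipse.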
3.2 Motivation 2: the Gauss-Newton iteration
Let us assume that is a linear operator. We consider the solution of equation (1.2) to be an element of an ALNN or DNN, as defined in Definition 1.1. Therefore can be parameterized by a vector as in equation (1.4) (for shallow linear neural networks) or equation (1.6) (for deep neural networks). For GQNNs the adequate parameterization can be read off from Definition 2.1. In other words, we can write every sought function via an operator which maps a parameter to a function in , i.e.
Therefore, we aim to solve the nonlinear operator equation (3.1)
For ALNNs we have proven in [3] that the Gauss-Newton method (see equation (3.7)) is locally quadratically convergent under conditions which guarantee that during the iteration the gradients of do not degenerate, i.e. that is a finite-dimensional manifold. Let us now calculate derivatives of the radial neural network operators (see equation (2.5)), which are the basis for proving that the Gauss-Newton method with RQNNs is also convergent:
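The iteration (3.7) can be sketched generically as follows. This is a minimal sketch with a user-supplied Jacobian; it assumes the least-squares subproblem is solved via the Moore-Penrose inverse of the Jacobian, as in the theorem below, and the toy problem is our own:

```python
import numpy as np

def gauss_newton(N, jac, p0, y, iters=20):
    """Gauss-Newton iteration, cf. (3.7):
    p_{k+1} = p_k - N'(p_k)^dagger (N(p_k) - y),
    with N'(p_k)^dagger the Moore-Penrose inverse of the Jacobian."""
    p = np.asarray(p0, dtype=float)
    for _ in range(iters):
        p = p - np.linalg.pinv(jac(p)) @ (N(p) - y)
    return p

# Toy example: N(p) = (p_0^2, p_0 + p_1); the data y = (4, 5) is attained
# exactly at p* = (2, 3), mimicking the attainability assumption (3.8).
N = lambda p: np.array([p[0] ** 2, p[0] + p[1]])
jac = lambda p: np.array([[2.0 * p[0], 0.0], [1.0, 1.0]])
p_star = gauss_newton(N, jac, np.array([1.0, 1.0]), np.array([4.0, 5.0]))
```

When the data is attained exactly, the iteration behaves like Newton's method on the parameter vector, which reflects the local quadratic convergence stated in Theorem 3.3.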
Example 3.1.
Let σ be a continuous activation function whose derivatives up to order 2 are uniformly bounded, i.e. in formulas (3.2) such as the sigmoid function. Then the derivatives of a radial neural network (RQNN) as in equation (2.5) with respect to the coefficients of in equation (2.2) are given by the following formulas, where ν_s is defined in equation (2.6):
Derivative with respect to α_s, : (3.3)
Derivative with respect to where : (3.4)
Derivative with respect to θ_s where : (3.5)
Derivative with respect to : (3.6) Note that all the derivatives above are functions in .
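As a sanity check on derivative formulas of the type (3.3)–(3.6), the analytic derivative of a single radial neuron with respect to its radius can be compared against a finite difference. The explicit neuron σ(ξ(r² − |x − y|²)) below is our hedged reading of (2.5)/(2.6), not the paper's exact notation:

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def dsigmoid(t):
    s = sigmoid(t)
    return s * (1.0 - s)

def neuron(x, y, xi, r):
    """One radial neuron: sigma(xi * (r^2 - |x - y|^2))."""
    return sigmoid(xi * (r ** 2 - np.sum((x - y) ** 2)))

def dneuron_dr(x, y, xi, r):
    """Analytic r-derivative via the chain rule: 2 * xi * r * sigma'(argument)."""
    return 2.0 * xi * r * dsigmoid(xi * (r ** 2 - np.sum((x - y) ** 2)))
```

The uniform boundedness of σ, σ' and σ'' required in (3.2) is what keeps such derivatives uniformly bounded in the parameters.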
For formulating a convergence result for the Gauss-Newton method (see equation (3.7)) we need to specify the following assumptions, which were postulated first in [3] to prove local convergence of a Gauss-Newton method on a set of shallow affine linear network functions. Here we verify convergence of the Gauss-Newton method on a set of RQNNs, as defined in equation (2.5). The proof is completely analogous to that in [3].
Assumption 3.2.
be a linear, bounded operator with trivial nullspace and dense range.
is a shallow RQNN generated by a strictly monotonic activation function σ which satisfies equation (3.2), like a sigmoid function.
All derivatives in equations (3.3)–(3.6) are locally linearly independent functions.
Now, we recall a local convergence result:
Theorem 3.3
(Local convergence of the Gauss-Newton method with RQNNs). Let be as in equation (3.1), i.e. the composition of a linear operator F and the RQNN network operator as defined in equation (2.5), which satisfy Assumption 3.2. Moreover, let , which is open, be the starting point of the Gauss-Newton iteration (3.7) where denotes the Moore-Penrose inverse of (see Footnote 1).
Moreover, let be a solution of equation (3.1), i.e. (3.8)
Then the Gauss-Newton iterates converge locally, that is if is sufficiently close to , and quadratically.
Remark 2.
The essential condition in Theorem 3.3 is that all derivatives of with respect to are linearly independent, which is a nontrivial research question. In [3] we studied convergence of the Gauss-Newton method on a set of shallow affine linear neural networks, where the convergence result required linear independence of the derivatives specified in equations (3.3)–(3.5). Here, for RQNNs, in addition linear independence of the 2nd-order moment function in equation (3.6) needs to hold. Even linear independence of neural network functions (with derivatives) is still a challenging research topic (see [30]).
Under our assumptions the ill-posedness of F does not affect convergence of the Gauss-Newton method. The generalized inverse completely annihilates F in the iteration. The catch, however, is that the data has to be attained in a finite-dimensional space spanned by neurons (that is, equation (3.8) holds). This assumption is in fact restrictive: Numerical tests show that convergence of Gauss-Newton methods becomes arbitrarily slow if it is violated. Here the ill-posedness of F enters. The proof of Theorem 3.3 is based on the affine covariant condition (see for instance [31]).
In the following we show the complexity of the convergence conditions of a Gauss-Newton method in the case when deep affine linear neural network functions are used for coding.
Example 3.4
(Convergence conditions for deep neural networks). We restrict our attention to the simple case n = 1. We consider a 4-layer DNN (consisting of two internal layers) with , which reads as follows: (3.9)
Now, we calculate by the chain rule. For this purpose we define
With this definition we rewrite equation (3.9) as
Since
we therefore get (3.10)
Note that is a function of x depending on the parameter , which contains as one component.
The complicating issue in a potential convergence analysis concerns the last identity, equation (3.10), where evaluations of at different arguments appear simultaneously, which makes it complicated to analyze and interpret linear independence of the derivatives of with respect to the single elements of . Note that for proving convergence of the Gauss-Newton method, according to equation (3.10), we need to verify linear independence of the functions
In other words, the potential manifold of deep neural networks is extremely complex.
The conclusion of this section is that the analysis of iterative regularization methods, like a Gauss-Newton method, gets significantly more complicated if deep neural networks are used as ansatz functions. In contrast, using neural networks with higher-order nodes (such as radial ones) results in transparent moment conditions. So the research question discussed in the following is whether shallow higher-order neural networks have approximation properties similar to those of deep neural network functions, which would reveal clear benefits of such networks for analyzing iterative methods for solving ill-posed problems.
4 Convergence rates for universal approximation of RQNNs
In the following we prove convergence rates for RQNNs (as defined in equation (2.5)) in the -norm. To be precise, we specify a subclass of RQNNs for which we prove convergence rates results. This is a much finer result than the standard universal approximation result, Theorem 2.5, since it operates on a subclass and provides convergence rates. However, it is the only approximation result so far which we can provide. We recall that the constraints and (see equation (2.7)) imply that the level sets of the neurons are compact (circles) and not unbounded, as they are for shallow affine linear neurons. Therefore we require a different analysis (and different spaces) than in the most advanced and optimal convergence rates results for affine linear neural networks, as for instance in [10, 11]. In fact the analysis is closer to an analysis of compact wavelets.
The convergence rate will be expressed in the following norm:
Definition 4.1.
denotes the space of square integrable functions on , which satisfy [32, (1.10)]
Here c_g are the coefficients of a wavelet expansion and D is a countable, general set of wavelet functions. A discrete wavelet basis (see, for example, [33–36]) is a (not necessarily orthonormal) basis of the space of square integrable functions, where the basis elements are given by dilating and translating a mother wavelet function.
Remark 3.
Notice that the notation does not refer to the common L1-function space of absolutely integrable functions and depends on the choice of the wavelet system. For more properties and details on this space see [37, Remark 3.11].
Definition 4.2
(Circular scaling function and wavelets). Let r > 0 be a fixed constant and σ be a discriminatory function as defined in Definition 2.2 such that . Then let (4.1) where C_n is a normalizing constant such that .
Then we define for the radial scaling functions and wavelets (4.2)
Often is considered a parameter and we write synonymously (4.3)
In particular, this notation means that and are considered solely functions of the variable .
We consider the following subset of RQNNs: (4.4)
This is different from standard universal approximation theorems, where an uncountable number of displacements is considered. Moreover, we define the discrete wavelet space (4.5)
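The key structural facts used below, unit mass of each dilated kernel and zero mean of the wavelet, can be checked numerically on a one-dimensional model. The particular bump σ(r² − x²) − σ(−x²) is a hypothetical stand-in for the construction (4.1)/(4.2), not the paper's exact normalization:

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# Radial bump built from a sigmoid (hypothetical stand-in for (4.1)):
# it is positive, decays like exp(-x^2), and gets normalized to unit mass.
r = 1.0
xs = np.linspace(-20.0, 20.0, 40001)
dx = xs[1] - xs[0]

def bump(x):
    return sigmoid(r ** 2 - x ** 2) - sigmoid(-x ** 2)

C = 1.0 / (np.sum(bump(xs)) * dx)   # normalizing constant, cf. C_n in (4.1)
S0 = C * bump(xs)                   # scaling kernel at scale k = 0
S1 = 2.0 * C * bump(2.0 * xs)       # dilated kernel 2^k * phi(2^k x), k = 1
psi = S1 - S0                       # wavelet as difference of scales, cf. (4.2)
```

Each dilate keeps unit mass (an AtI-type family) and the difference of consecutive scales has zero mean, which is the wavelet cancellation used in Section 4.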
According to Theorem A.3, in order to prove the following approximation result we only need to verify that satisfies the conditions for a symmetric AtI and the double Lipschitz condition (see Definition A.1), which originate from [24, Def. 3.4].
Corollary 4.3.
is a frame and for every function there exists a linear combination of N elements of , denoted by f_N, satisfying (4.6)
For proving the approximation to the identity (AtI) property of the functions , we use the following basic inequality.
Lemma 4.4.
Let be a twice differentiable function, which can be expressed in the following way:
Then the spectral norm of the Hessian of h can be estimated as follows (see Footnote 2): (4.7)
Proof.
Since is a symmetric matrix, its operator norm is equal to its spectral radius, namely the largest absolute value of an eigenvalue. By routine calculation we can see that
Or in other words, is an eigenvalue of C. Moreover, is a rank-one matrix and thus its spectral values are 0 with multiplicity and . This in turn shows that the eigenvalues of the Hessian are (with multiplicity n − 1) and , which proves equation (4.7). □
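The eigenvalue structure in Lemma 4.4 can be verified numerically for a radial composition h(x) = g(|x|²), which is our hedged reading of the lemma's form: the Hessian is 2g'(s)I + 4g''(s)xxᵀ with s = |x|², so the spectrum is 2g'(s) with multiplicity n − 1 together with 2g'(s) + 4g''(s)|x|²:

```python
import numpy as np

# Hessian of h(x) = g(|x|^2) (hedged reading of Lemma 4.4):
# H = 2 g'(s) I + 4 g''(s) x x^T, with s = |x|^2.
def hessian_radial(x, gp, gpp):
    s = float(x @ x)
    return 2.0 * gp(s) * np.eye(len(x)) + 4.0 * gpp(s) * np.outer(x, x)

# Example: g = sin, so g' = cos and g'' = -sin.
x = np.array([0.3, -0.4, 1.2])
s = float(x @ x)
H = hessian_radial(x, np.cos, lambda t: -np.sin(t))
eigs = np.sort(np.linalg.eigvalsh(H))

# Predicted spectrum: 2 g'(s) (multiplicity n - 1) and 2 g'(s) + 4 g''(s) s.
predicted = np.sort(np.array([2 * np.cos(s),
                              2 * np.cos(s),
                              2 * np.cos(s) - 4 * np.sin(s) * s]))
```

The spectral norm of H is then the larger of |2g'(s)| and |2g'(s) + 4g''(s)s|, which is the bound (4.7) uses.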
In the following lemma, we will prove that the kernels are an AtI.
Lemma 4.5.
Let r > 0 be fixed. Suppose that the activation function is monotonically increasing and satisfies for the i-th derivative (i = 0, 1, 2) (4.8)
Then the kernels as defined in equation (4.2) form an AtI as defined in Definition A.1 that also satisfies the double Lipschitz condition, equation (A.4).
Proof.
We verify the three conditions from Definition A.1 as well as equation (A.4). First of all, we note that (4.9)
Verification of (i) in Definition A.1: Equations (4.1) and (4.8) imply that (4.10) Therefore Thus (i) in Definition A.1 holds with and and .
Verification of (ii) in Definition A.1 with and : Because σ is monotonically increasing, it follows from equation (4.1), the fact that (see equation (4.2)), and Definition 4.2 that Then equation (4.9) implies that From the definition of , it follows that (4.11) From the mean value theorem it therefore follows from equations (4.11) and (A.6) that (4.12) Then application of equation (A.6), noting that , gives Therefore (ii) is satisfied with , and .
Verification of (iii) in Definition A.1: From the definition of (see equation (4.2)) it follows that for every and
Verification of the double Lipschitz condition, equation (A.4), in Definition A.1: By using the integral version of the mean value theorem, we have Following this identity, we get (4.13) where and again denotes the spectral norm of .
Now we estimate the right-hand side of equation (4.13): From the definition of in equation (4.2) and the definition of in equation (4.1), it follows with the abbreviation :
Application of Lemma 4.4 with and shows that (note that ) (4.14)
Thus from equation (4.9) it follows that
In the next step we note that from equation (A.7) it follows that (4.15)
Thus we get because
Therefore the double Lipschitz condition is satisfied with , and . □
We have shown with Lemma 4.5 that the functions form an AtI which satisfies the double Lipschitz condition (see Definition A.1). Moreover, for activation functions like the sigmoid function, equation (4.8) implies that (4.16) which shows equation (3.2). The limit result is an easy consequence of the fact that and its derivative decay to zero faster than the functions . Therefore, it follows from Theorem A.3 that the set is a wavelet frame and satisfies the estimate in equation (A.14). We summarize this important result:
Theorem 4.6
(-convergence of radial wavelets). Let σ be an activation function that satisfies the conditions of Lemma 4.5. Then, for every function (see Definition 4.1 for a definition of the space) and every , there exists a function , a linear combination of at most N terms of the set , such that (4.17)
Using Theorem 4.6 we are able to formulate the main result of this paper:
Corollary 4.7
(-convergence of RQNNs). Let the assumptions of Theorem 4.6 be satisfied. Then, for every function (see Definition 4.1) and every , there exists a parametrization vector such that (4.18) where is an RQNN from equation (2.5).
Proof.
From Theorem 4.6 it follows that there exists , which satisfies equation (4.17). By definition, where denotes the characteristic function of a set of indices of size N. Using the definition of , we get from equations (4.2) and (4.1)
The arguments where σ is evaluated are in general different, and thus we get from which the coefficients can be read off. Note that for the sake of simplicity we have not explicitly stated the exact index in the subscript of the coefficients ; in other words we use the formal relation to indicate the dependencies. This shows the claim if we identify and
□
Remark 4.
Our results in Section 4 are proved using methods similar to those in [37], which, however, cannot be used directly, because the main ingredient of the proof is the construction of localizing functions serving as wavelets. This task requires two layers of affine linear decision functions to accomplish, since linear combinations of functions of the form are not localized; they cannot satisfy (iii) in Definition A.1, as they are not integrable on . In comparison, some quadratic decision functions, such as those used in Section 4, naturally give localized functions, and can therefore work as wavelets on their own.
We mention that there exists a zoo of approximation rates results for neural network functions (see for instance [10, 11], where best-approximation results in dependence on the space dimension n have been proven). We have based our analysis on the convergence rates results of [37] and [24]. This choice is motivated by the fact that the AtI property from [24] is universal and allows a natural extension of their results to quadratic neural network functions. The motivation here is not to give the optimal convergence rates result for quadratic neural networks, but to verify that convergence rates results are possible. We believe that AtI is a very flexible and elegant tool.
5 Conclusion
In this paper we studied generalized neural network functions for solving inverse problems. We proved a universal approximation theorem and convergence rates for radial neural networks, which are of the same order as for classical affine linear neural network functions, but with a vastly reduced number of elements (in particular, one layer is spared). The paper presents a proof of concept and we thus restrict our attention to radial neural networks, although generalizations to higher-order neural networks (such as cubic ones) are quite straightforward.
We also remark on the convergence of the actual training process of a neural network, i.e. the optimization of the parameters of each decision function. This is usually done by gradient descent or Gauss-Newton-type methods. From the analysis in [3], the convergence conditions of Newton's method become very complicated for DNNs, but are transparent for quadratic neural networks, which is in fact a consequence of the complicated chain rule that arises when differentiating DNNs with respect to their parameters. For such an analysis there is a clear preference for increasing the degree of the polynomials rather than the number of layers.
Acknowledgments
The authors would like to thank some referees for their valuable suggestions and their patience.
Notes
1 For a detailed exposition on generalized inverses see [Citation29].
2 In the following, and (without subscripts) always denote derivatives with respect to an n-dimensional variable such as ; and denote derivatives of a one-dimensional function.
References
- Natterer, F. (1977). Regularisierung schlecht gestellter Probleme durch Projektionsverfahren. Numerische Mathematik 28(3):329–341. DOI: 10.1007/BF01389972.
- Obmann, D., Schwab, J., Haltmeier, M. (2021). Deep synthesis network for regularizing inverse problems. Inverse Problems 37(1):015005. DOI: 10.1088/1361-6420/abc7cd.
- Scherzer, O., Hofmann, B., Nashed, Z. (2023). Gauss–Newton method for solving linear inverse problems with neural network coders. Sampling Theory Signal Process. Data Anal. 21(2):25. DOI: 10.1007/s43670-023-00066-6.
- Pinkus, A. (1999). Approximation theory of the MLP model in neural networks. Acta Numerica 8:143–195. DOI: 10.1017/S0962492900002919.
- Barron, A. R. (1993). Universal approximation bounds for superpositions of a sigmoidal function. IEEE Trans. Inf. Theory 39(3):930–945. DOI: 10.1109/18.256500.
- Cybenko, G. (1989). Approximation by superpositions of a sigmoidal function. Math. Control Signals Syst. 2(4):303–314. DOI: 10.1007/BF02551274.
- Hornik, K., Stinchcombe, M., White, H. (1989). Multilayer feedforward networks are universal approximators. Neural Netw. 2(5):359–366. DOI: 10.1016/0893-6080(89)90020-8.
- Leshno, M., Lin, V. Y., Pinkus, A., Schocken, S. (1993). Multilayer feedforward networks with a nonpolynomial activation function can approximate any function. Neural Netw. 6(6):861–867. DOI: 10.1016/s0893-6080(05)80131-5.
- Mhaskar, H. N. (1993). Approximation properties of a multilayered feedforward artificial neural network. Adv. Comput. Math. 1(1):61–80. DOI: 10.1007/BF02070821.
- Siegel, J. W., Xu, J. (2022). Sharp bounds on the approximation rates, metric entropy, and n-Widths of shallow neural networks. Found. Comput. Math. DOI: 10.1007/s10208-022-09595-3.
- Siegel, J. W., Xu, J. (2023). Characterization of the variation spaces corresponding to shallow neural networks. Constr. Approx. 57(3):1109–1132. DOI: 10.1007/s00365-023-09626-4.
- Wager, S., Wang, S., Liang, P. (2013). Dropout training as adaptive regularization. In: Proceedings of the 26th International Conference on Neural Information Processing Systems - Volume 1. Curran Associates Inc., pp. 351–359.
- Manita, O., Peletier, M., Portegies, J., Sanders, J., Senen-Cerda, A. (2022). Universal approximation in dropout neural networks. J. Mach. Learn. Res. 23(19):1–46.
- Zhou, D.-X. (2018). Deep distributed convolutional neural networks: universality. Anal. Appl. 16(6):895–919. DOI: 10.1142/S0219530518500124.
- Zhou, D.-X. (2020). Universality of deep convolutional neural networks. Appl. Comput. Harmon. Anal. 48(2):787–794. DOI: 10.1016/j.acha.2019.06.004.
- Schäfer, A. M., Zimmermann, H.-G. (2007). Recurrent neural networks are universal approximators. Int. J. Neural Syst. 17(4):253–263. DOI: 10.1142/s0129065707001111.
- Hammer, B. (2000). Learning with Recurrent Neural Networks. Lecture Notes in Control and Information Sciences. London: Springer.
- White, H. (1989). An additional hidden unit test for neglected nonlinearity in multilayer feedforward networks. In: International Joint Conference on Neural Networks. DOI: 10.1109/ijcnn.1989.118281.
- Pao, Y.-H., Park, G.-H., Sobajic, D. J. (1994). Learning and generalization characteristics of the random vector functional-link net. Neurocomputing 6(2):163–180. DOI: 10.1016/0925-2312(94)90053-1.
- Igelnik, B., Pao, Y.-H. (1995). Stochastic choice of basis functions in adaptive function approximation and the functional-link net. IEEE Trans. Neural Netw. 6(6):1320–1329. DOI: 10.1109/72.471375.
- Gelenbe, E., Mao, Z.-W., Li, Y.-D. (1999). Approximation by random networks with bounded number of layers. In: Neural Networks for Signal Processing IX: Proceedings of the 1999 IEEE Signal Processing Society Workshop (Cat. No.98TH8468). DOI: 10.1109/nnsp.1999.788135.
- Tsapanos, N., Tefas, A., Nikolaidis, N., Pitas, I. (2019). Neurons with paraboloid decision boundaries for improved neural network classification performance. IEEE Trans. Neural Netw. Learn. Syst. 30(1):284–294. DOI: 10.1109/tnnls.2018.2839655.
- Fan, F., Xiong, J., Wang, G. (2020). Universal approximation with quadratic deep networks. Neural Netw. 124:383–392. DOI: 10.1016/j.neunet.2020.01.007.
- Deng, D., Han, Y. (2009). Harmonic Analysis on Spaces of Homogeneous Type. Berlin, Heidelberg: Springer. DOI: 10.1007/978-3-540-88745-4.
- Buhmann, M. D. (2003). Radial Basis Functions. Cambridge: Cambridge University Press. DOI: 10.1017/cbo9780511543241.
- Dugundji, J. (1978). Topology, 2nd ed. Boston, MA: Allyn and Bacon, Inc.
- Kelley, J. L. (1955). General Topology. Toronto-New York-London: D. Van Nostrand Company.
- Shepp, L. A., Logan, B. F. (1974). The Fourier reconstruction of a head section. IEEE Trans. Nucl. Sci. 21(3):21–43. DOI: 10.1109/TNS.1974.6499235.
- Nashed, M., ed. (1976). Generalized Inverses and Applications. New York: Academic Press [Harcourt Brace Jovanovich Publishers], pp. xiv + 1054.
- Lamperski, A. (2022). Neural network independence properties with applications to adaptive control. In: 2022 IEEE 61st Conference on Decision and Control (CDC). DOI: 10.1109/CDC51059.2022.9992994.
- Deuflhard, P., Hohmann, A. (1991). Numerical Analysis. A First Course in Scientific Computation. Berlin: De Gruyter.
- Barron, A. R., Cohen, A., Dahmen, W., DeVore, R. A. (2008). Approximation and learning by greedy algorithms. Ann. Stat. 36(1):64–94. DOI: 10.1214/009053607000000631.
- Daubechies, I. (1992). Ten Lectures on Wavelets. Philadelphia, PA: SIAM, xx + 357. DOI: 10.1137/1.9781611970104.
- Graps, A. (1995). An introduction to wavelets. IEEE Comput. Sci. Eng. 2(2):50–61. DOI: 10.1109/99.388960.
- Louis, A., Maass, P., Rieder, A. (1998). Wavelets. Theorie und Anwendungen, 2nd ed. Stuttgart: Teubner.
- Chui, C. (1992). An Introduction to Wavelets, Vol. 1. New York: Academic Press.
- Shaham, U., Cloninger, A., Coifman, R. R. (2018). Provable approximation properties for deep neural networks. Appl. Comput. Harmon. Anal. 44(3):537–557. DOI: 10.1016/j.acha.2016.04.003.
A Approximation to the identity (AtI)
Definition A.1 (Approximation to the identity [Citation24]). A sequence of symmetric kernel functions is said to be an approximation to the identity (AtI) if there exists a quintuple of positive numbers satisfying the additional constraints (A.1) such that the following three conditions are satisfied for all :
(i) for all ;
(ii) for all triples which satisfy (A.2) ;
(iii) for all .
Moreover, we say that the AtI satisfies the double Lipschitz condition if there exists a triple of positive constants satisfying (A.3) such that for all , (A.4) holds for all quadruples which satisfy (A.5) .
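The abstract conditions above can be made concrete with a standard example (our own illustration, not taken from [Citation24]): rescaled Gaussian kernels S_k(x, y) = 2^k g(2^k (x − y)) on the real line are symmetric, integrate to one in y, and decay away from the diagonal at scale 2^{-k}. The sketch below checks these three properties numerically; all names and tolerances are our choices.

```python
import numpy as np

def S(k, x, y):
    """Gaussian AtI kernel at scale 2**-k: S_k(x, y) = 2**k * g(2**k * (x - y)),
    with g(t) = exp(-pi * t**2), which integrates to 1 over the real line."""
    h = 2.0 ** k
    return h * np.exp(-np.pi * (h * (x - y)) ** 2)

y = np.linspace(-50.0, 50.0, 2_000_001)  # fine grid, wide enough to capture the mass
dy = y[1] - y[0]
for k in range(4):
    # normalization: integral of S_k(0, y) dy is approximately 1
    assert abs(np.sum(S(k, 0.0, y)) * dy - 1.0) < 1e-6
    # symmetry in (x, y)
    assert np.isclose(S(k, 0.3, 0.7), S(k, 0.7, 0.3))
    # decay away from the diagonal: kernel is negligible once |x - y| >> 2**-k
    assert S(k, 0.0, 5.0) < 1e-8
print("normalization, symmetry and decay checks passed")
```

The size and smoothness conditions of Definition A.1 are of course stronger than these spot checks; the sketch only illustrates the qualitative behavior an AtI family is required to have.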
The conditions (ii) and (A.5) are essential for our analysis. We now characterize geometric properties of these constrained sets:
Lemma A.2.
Let and .
Then the set of triples which satisfy equation (A.2) and for which and all satisfy (A.6)
The set of quadruples which satisfy equation (A.5) and for which satisfy (A.7)
Proof.
With the concrete choice of parameters CA, equation (A.2) reads as follows (A.8) Since we assume that , it follows from equation (A.8) that . In particular, . We apply Jensen’s inequality, which states that for (A.9) We use and , which then (along with the triangle inequality) gives In other words, it follows from equation (A.8) that
With the concrete choice of parameters equation (A.5) reads as follows (A.10) Since we assume that , it follows from equation (A.10) that . This in particular shows that . We apply Jensen’s inequality, which states that for (A.11) We use and , which then (along with the triangle inequality) gives In other words, it follows from equation (A.10) that
□
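Both parts of the proof rest on the same Jensen-type inequality. Since the displayed formulas are not fully legible here, we record its standard form, writing a generic exponent γ (our notation) for the exponent used in (A.9) and (A.11):

```latex
% Subadditivity of t -> t^gamma on [0, infinity) for 0 < gamma <= 1
% (a consequence of the concavity of this map):
(a + b)^{\gamma} \le a^{\gamma} + b^{\gamma}, \qquad a, b \ge 0 .
% Combined with the triangle inequality |x - y| <= |x - z| + |z - y|,
% this yields the quasi-metric estimate used in the proof:
|x - y|^{\gamma} \le |x - z|^{\gamma} + |z - y|^{\gamma} .
```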
The approximation to the identity in Definition A.1 can be used to construct wavelet frames that can approximate arbitrary functions in , as shown in the theorem below.
Remark 5.
When , equation (A.6) also holds: its right-hand side is negative, so the inequality is trivial.
Theorem A.3 ([Citation37]). Let be a symmetric AtI which satisfies the double Lipschitz condition (see equation (A.4)). Let (A.12)
Then the set of functions (A.13) is a frame, and for every function there exists a linear combination of N elements of , denoted by fN, satisfying (A.14)
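The qualitative content of Theorem A.3, namely that AtI kernels at finer scales reproduce a function increasingly well, can be observed numerically. The sketch below is our own illustration: Gaussian kernels stand in for the AtI of the theorem, a smooth periodic test function replaces a general element of the function space, and the convolutions S_k f are computed by FFT; the recorded sup-norm errors decrease as the scale 2^{-k} is refined.

```python
import numpy as np

x = np.linspace(-np.pi, np.pi, 4096, endpoint=False)
dx = x[1] - x[0]
f = np.sin(3 * x) + 0.5 * np.cos(7 * x)   # smooth periodic test function

errors = []
for k in range(5):
    h = 2.0 ** k
    # Gaussian AtI kernel at scale 2**-k, sampled on the same grid
    g = h * np.exp(-np.pi * (h * x) ** 2)
    # periodic convolution (S_k f)(x) = integral of S_k(x - y) f(y) dy, via FFT;
    # ifftshift moves the kernel center from the middle of the grid to index 0
    Skf = np.fft.ifft(np.fft.fft(f) * np.fft.fft(np.fft.ifftshift(g))).real * dx
    errors.append(np.max(np.abs(Skf - f)))

# the sup-norm error decreases monotonically as the kernel scale is refined
assert all(e1 > e2 for e1, e2 in zip(errors, errors[1:]))
print(errors)
```

This only illustrates the approximation-to-the-identity property S_k f → f; the frame expansion and the N-term rate (A.14) of Theorem A.3 additionally require the double Lipschitz condition and are not reproduced by this sketch.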