Abstract
Recently, neural networks have been widely applied to solving partial differential equations (PDEs). Although such methods have proven remarkably successful on practical engineering problems, they have not been shown, theoretically or empirically, to converge to the underlying PDE solution with arbitrarily high accuracy. The primary difficulty lies in solving the highly non-convex optimization problems that arise from the neural network discretization, which are difficult to treat both theoretically and practically. Our goal in this work is to take a step toward remedying this. To this end, we develop a novel greedy training algorithm for shallow neural networks. Our method is applicable both to the variational formulation of the PDE and to the residual minimization formulation pioneered by physics-informed neural networks (PINNs). We analyze the method and obtain a priori error bounds when the PDE solution lies in the function class defined by shallow networks, which rigorously establishes the convergence of the method as the network size increases. Finally, we test the algorithm on several benchmark examples, including high-dimensional PDEs, and confirm the theoretical convergence rate. Although the method is expensive relative to traditional approaches such as finite element methods, we view this work as a proof of concept for neural network-based methods: it demonstrates that numerical methods built on neural networks can be made to converge rigorously.
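To make the greedy training idea concrete, below is a minimal sketch of one standard variant: an orthogonal greedy algorithm over a dictionary of ReLU neurons, applied to a least-squares fitting problem. The candidate-sampling strategy, the ReLU dictionary, and the names `greedy_fit` and `relu` are illustrative assumptions, not the exact algorithm or formulation from the paper; a residual-minimization (PINN-style) version would repeat the same selection-then-projection loop on the PDE residual instead of a data residual.

```python
import numpy as np

# Hedged sketch of greedy training for a shallow network: at each step,
# pick the dictionary element (here a ReLU neuron x -> max(w*x + b, 0))
# most correlated with the current residual, then re-solve for all outer
# coefficients by least squares. Illustrative only; details are assumptions.

rng = np.random.default_rng(0)

def relu(z):
    return np.maximum(z, 0.0)

def greedy_fit(x, y, n_neurons=20, n_candidates=2000):
    """Greedily add ReLU neurons; return (weights, biases, outer coeffs)."""
    residual = y.copy()
    W, B, basis = [], [], []
    for _ in range(n_neurons):
        # Sample candidate inner parameters (w, b).
        w = rng.uniform(-1.0, 1.0, n_candidates)
        b = rng.uniform(-1.0, 1.0, n_candidates)
        cand = relu(np.outer(x, w) + b)            # (n_points, n_candidates)
        norms = np.linalg.norm(cand, axis=0) + 1e-12
        # Greedy selection: maximize |<candidate, residual>| / ||candidate||.
        scores = np.abs(cand.T @ residual) / norms
        k = int(np.argmax(scores))
        W.append(w[k]); B.append(b[k])
        basis.append(relu(x * w[k] + b[k]))
        # Orthogonal step: least-squares re-fit of all outer coefficients.
        A = np.stack(basis, axis=1)
        coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)
        residual = y - A @ coeffs
    return np.array(W), np.array(B), coeffs

# Usage: approximate a smooth target on [-1, 1].
x = np.linspace(-1.0, 1.0, 400)
y = np.sin(np.pi * x)
W, B, c = greedy_fit(x, y)
err = np.linalg.norm(y - relu(np.outer(x, W) + B) @ c) / np.linalg.norm(y)
print(f"relative L2 error with {len(c)} neurons: {err:.2e}")
```

Re-solving for all outer coefficients at every step (the "orthogonal" part) is typically what drives provable convergence rates for greedy methods, as opposed to keeping earlier coefficients frozen once chosen.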
| Original language | English (US) |
| --- | --- |
| Article number | 112084 |
| Journal | Journal of Computational Physics |
| Volume | 484 |
| DOIs | |
| State | Published - Jul 1 2023 |
Keywords
- Generalization accuracy
- Greedy algorithms
- Neural networks
- Partial differential equations
ASJC Scopus subject areas
- Numerical Analysis
- Modeling and Simulation
- Physics and Astronomy (miscellaneous)
- General Physics and Astronomy
- Computer Science Applications
- Computational Mathematics
- Applied Mathematics