TY - JOUR
T1 - Meta-MgNet: Meta multigrid networks for solving parameterized partial differential equations
AU - Chen, Yuyan
AU - Dong, Bin
AU - Xu, Jinchao
N1 - Generated from Scopus record by KAUST IRTS on 2023-02-15
PY - 2022/4/15
Y1 - 2022/4/15
N2 - This paper studies numerical solutions of parameterized partial differential equations (PDEs) with deep learning. Parameterized PDEs arise in many important application areas, including design optimization, uncertainty analysis, optimal control, and inverse problems. The computational cost of these applications using traditional numerical schemes can be exorbitant, especially when the parameters fall into a particular range and the underlying PDE model must be solved with high accuracy on a fine spatial-temporal mesh. Recently, solving PDEs with deep learning has become an emerging field in scientific computing. Existing works demonstrate the great potential of deep learning-based approaches in speeding up numerical solutions of various types of PDEs. However, there is still limited research on deep learning approaches for solving parameterized PDEs. If existing deep supervised learning models are applied directly to parameterized PDEs, they must be constantly fine-tuned or retrained whenever the parameters of the PDE change, which limits their applicability and utility in practice. To resolve this issue, we propose a meta-learning-based method that can efficiently solve parameterized PDEs over a wide range of parameters without retraining. Our key observation is to regard training a solver for a parameterized PDE with a given set of parameters as a learning task. Training a solver for parameterized PDEs with varied parameters can then be viewed as a multi-task learning problem, to which meta-learning is one of the most effective approaches. This new perspective can be applied to many existing PDE solvers to make them suitable for solving parameterized PDEs. As an example, we adopt the Multigrid Network (MgNet) [21] as the base solver. To achieve multi-task learning, we introduce a new hypernetwork, called Meta-NN, into MgNet and refer to the entire network as Meta-MgNet. Meta-NN takes the differential operators and the right-hand side of the underlying parameterized PDEs as inputs and generates appropriate smoothers for MgNet, which are essential ingredients of multigrid methods and can significantly affect the convergence speed. The proposed Meta-NN is carefully designed so that Meta-MgNet has guaranteed convergence for Poisson's equation. Finally, extensive numerical experiments demonstrate that Meta-MgNet solves parameterized PDEs more efficiently than MG methods and MgNet trained by supervised learning.
AB - This paper studies numerical solutions of parameterized partial differential equations (PDEs) with deep learning. Parameterized PDEs arise in many important application areas, including design optimization, uncertainty analysis, optimal control, and inverse problems. The computational cost of these applications using traditional numerical schemes can be exorbitant, especially when the parameters fall into a particular range and the underlying PDE model must be solved with high accuracy on a fine spatial-temporal mesh. Recently, solving PDEs with deep learning has become an emerging field in scientific computing. Existing works demonstrate the great potential of deep learning-based approaches in speeding up numerical solutions of various types of PDEs. However, there is still limited research on deep learning approaches for solving parameterized PDEs. If existing deep supervised learning models are applied directly to parameterized PDEs, they must be constantly fine-tuned or retrained whenever the parameters of the PDE change, which limits their applicability and utility in practice. To resolve this issue, we propose a meta-learning-based method that can efficiently solve parameterized PDEs over a wide range of parameters without retraining. Our key observation is to regard training a solver for a parameterized PDE with a given set of parameters as a learning task. Training a solver for parameterized PDEs with varied parameters can then be viewed as a multi-task learning problem, to which meta-learning is one of the most effective approaches. This new perspective can be applied to many existing PDE solvers to make them suitable for solving parameterized PDEs. As an example, we adopt the Multigrid Network (MgNet) [21] as the base solver. To achieve multi-task learning, we introduce a new hypernetwork, called Meta-NN, into MgNet and refer to the entire network as Meta-MgNet. Meta-NN takes the differential operators and the right-hand side of the underlying parameterized PDEs as inputs and generates appropriate smoothers for MgNet, which are essential ingredients of multigrid methods and can significantly affect the convergence speed. The proposed Meta-NN is carefully designed so that Meta-MgNet has guaranteed convergence for Poisson's equation. Finally, extensive numerical experiments demonstrate that Meta-MgNet solves parameterized PDEs more efficiently than MG methods and MgNet trained by supervised learning.
UR - https://linkinghub.elsevier.com/retrieve/pii/S0021999122000584
UR - http://www.scopus.com/inward/record.url?scp=85123709744&partnerID=8YFLogxK
U2 - 10.1016/j.jcp.2022.110996
DO - 10.1016/j.jcp.2022.110996
M3 - Article
SN - 1090-2716
VL - 455
JO - Journal of Computational Physics
JF - Journal of Computational Physics
ER -