TY - GEN
T1 - Solving Bayesian Inverse Problems via Variational Autoencoders
AU - Goh, Hwan
AU - Sheriffdeen, Sheroze
AU - Wittmer, Jonathan
AU - Bui-Thanh, Tan
N1 - KAUST Repository Item: Exported on 2023-07-25
Acknowledgements: This research was partially funded by the National Science Foundation awards NSF-1808576 and NSF-CAREER-1845799; by the Defense Threat Reduction Agency award DTRA-M1802962; by the Department of Energy award DE-SC0018147; by KAUST; by a 2018 ConTex award; and by a 2018 UT-Portugal CoLab award. The authors acknowledge the Texas Advanced Computing Center (TACC) at The University of Texas at Austin for providing HPC resources that have contributed to the research results reported within this paper. URL: http://www.tacc.utexas.edu. The authors would like to thank Jari Kaipio, Ruanui Nicholson and Rory Wittmer for the insightful discussions.
This publication acknowledges KAUST support, but has no KAUST affiliated authors.
PY - 2021/1/1
Y1 - 2021/1/1
N2 - In recent years, the field of machine learning has made phenomenal progress in the pursuit of simulating real-world data generation processes. One notable example of such success is the variational autoencoder (VAE). In this work, with a small shift in perspective, we leverage and adapt VAEs for a different purpose: uncertainty quantification (UQ) in scientific inverse problems. We introduce UQ-VAE: a flexible, adaptive, hybrid data/model-constrained framework for training neural networks capable of rapid modelling of the posterior distribution representing the unknown parameter of interest. Specifically, from divergence-based variational inference, our framework is derived such that most of the information usually present in scientific inverse problems is fully utilized in the training procedure. Additionally, this framework includes an adjustable hyperparameter that allows selection of the notion of distance between the posterior model and the target distribution. This introduces more flexibility in controlling how optimization directs the learning of the posterior model. Further, this framework possesses an inherent adaptive optimization property that emerges through the learning of the posterior uncertainty. Numerical results for an elliptic PDE-constrained Bayesian inverse problem are provided to verify the proposed framework.
UR - http://hdl.handle.net/10754/693188
UR - http://www.scopus.com/inward/record.url?scp=85141908956&partnerID=8YFLogxK
M3 - Conference contribution
SP - 386
EP - 425
BT - 2nd Mathematical and Scientific Machine Learning Conference, MSML 2021
PB - ML Research Press
ER -