Selection dynamics for deep neural networks

Hailiang Liu, Peter A. Markowich

Research output: Contribution to journal › Article › peer-review


Abstract

This paper presents a partial differential equation framework for deep residual neural networks and for the associated learning problem. This is done by carrying out continuum limits of neural networks with respect to width and depth. We study the well-posedness, the large-time behavior of solutions, and the characterization of the steady states of the forward problem. Several useful time-uniform estimates and stability/instability conditions are presented. We state and prove optimality conditions for the inverse deep learning problem, using standard variational calculus, the Hamilton-Jacobi-Bellman equation, and the Pontryagin maximum principle. This serves to establish a mathematical foundation for investigating the algorithmic and theoretical connections between neural networks, PDE theory, variational analysis, optimal control, and deep learning.
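The depth-continuum limit underlying such frameworks can be illustrated, in generic notation that is not necessarily the paper's, by viewing a residual block as a forward-Euler step of an ordinary differential equation in the layer variable:

```latex
% Residual update for layer k with depth step \tau = T/K (illustrative notation):
%   x_{k+1} = x_k + \tau\, f(x_k, \theta_k), \qquad k = 0, \dots, K-1,
% and letting the number of layers K \to \infty (\tau \to 0) gives the
% depth-continuous forward dynamics
%   \dot{x}(t) = f\bigl(x(t), \theta(t)\bigr), \qquad t \in (0,T), \quad x(0) = x_{\mathrm{in}},
% with the learning problem then posed as optimal control of \theta(t).
\[
x_{k+1} = x_k + \tau\, f(x_k, \theta_k)
\;\xrightarrow[\;\tau \to 0\;]{}\;
\dot{x}(t) = f\bigl(x(t), \theta(t)\bigr), \quad x(0) = x_{\mathrm{in}}.
\]
```

The additional continuum limit in width, as studied in the paper, replaces the finite-dimensional state x(t) by a density over neurons, leading to the PDE (selection dynamics) formulation.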
Original language: English (US)
Pages (from-to): 11540-11574
Number of pages: 35
Journal: Journal of Differential Equations
Volume: 269
Issue number: 12
State: Published - Sep 21, 2020
