GLOBALLY CONVERGENT MULTILEVEL TRAINING OF DEEP RESIDUAL NETWORKS

Alena Kopaničáková, Rolf Krause

Research output: Contribution to journal › Article › peer-review


Abstract

We propose a globally convergent multilevel training method for deep residual networks (ResNets). The devised method can be seen as a novel variant of the recursive multilevel trust-region (RMTR) method, which operates in hybrid (stochastic-deterministic) settings by adaptively adjusting minibatch sizes during training. The multilevel hierarchy and the transfer operators are constructed by exploiting a dynamical systems viewpoint, which interprets forward propagation through the ResNet as a forward Euler discretization of an initial value problem. In contrast to traditional training approaches, our novel RMTR method also incorporates curvature information on all levels of the multilevel hierarchy by means of the limited-memory SR1 method. The overall performance and the convergence properties of our multilevel training method are numerically investigated using examples from the fields of classification and regression.
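To illustrate the dynamical systems viewpoint mentioned in the abstract, the following minimal Python sketch (not from the paper; the tanh residual map, the variable names, and the coarsening rule are illustrative assumptions) shows how each residual block acts as one forward Euler step x_{k+1} = x_k + h * F(x_k, W_k), and how a coarse level of a multilevel hierarchy can be obtained by keeping every other block while doubling the step size, so both levels integrate the same underlying ODE over the same time interval.

    import numpy as np

    def resnet_forward(x, weights, h):
        """Forward propagation through a ResNet, viewed as forward Euler
        integration of the ODE dx/dt = F(x, W(t)) with step size h.
        Each residual block computes x_{k+1} = x_k + h * F(x_k, W_k);
        the tanh layer used for F here is a placeholder."""
        for W in weights:
            x = x + h * np.tanh(W @ x)  # one Euler step = one residual block
        return x

    def coarsen(weights):
        """Hypothetical coarsening rule: keep every other block (the
        caller doubles the step size, so the integration interval
        T = h * len(weights) is preserved on the coarse level)."""
        return weights[::2]

    # Fine level: 8 residual blocks integrating over T = 1 with h = 1/8.
    rng = np.random.default_rng(0)
    fine_weights = [0.1 * rng.standard_normal((4, 4)) for _ in range(8)]
    x0 = rng.standard_normal(4)

    x_fine = resnet_forward(x0, fine_weights, h=1.0 / 8)
    x_coarse = resnet_forward(x0, coarsen(fine_weights), h=1.0 / 4)
    print(np.linalg.norm(x_fine - x_coarse))  # levels approximate the same dynamics

Because both levels discretize the same initial value problem, the coarse network is a cheaper surrogate of the fine one, which is what makes multilevel (RMTR-style) training steps on the coarse level meaningful.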

Original language: English (US)
Pages (from-to): S254-S280
Journal: SIAM Journal on Scientific Computing
Volume: 45
Issue number: 3
DOIs
State: Published - 2023

Keywords

  • deep residual networks
  • multilevel minimization
  • training algorithm
  • trust-region methods

ASJC Scopus subject areas

  • Computational Mathematics
  • Applied Mathematics
