Training very deep networks

Rupesh Kumar Srivastava, Klaus Greff, Jürgen Schmidhuber

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

1222 Scopus citations

Abstract

Theoretical and empirical evidence indicates that the depth of neural networks is crucial for their success. However, training becomes more difficult as depth increases, and training of very deep networks remains an open problem. Here we introduce a new architecture designed to overcome this difficulty. Our so-called highway networks allow unimpeded information flow across many layers on information highways. They are inspired by Long Short-Term Memory recurrent networks and use adaptive gating units to regulate the information flow. Even with hundreds of layers, highway networks can be trained directly through simple gradient descent. This enables the study of extremely deep and efficient architectures.
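The gating mechanism the abstract describes computes, per layer, y = H(x, W_H) · T(x, W_T) + x · (1 − T(x, W_T)), where T is a sigmoid "transform gate" that decides how much of the input is transformed versus carried through unchanged. Below is a minimal PyTorch sketch of one such layer under stated assumptions: the class name, the ReLU nonlinearity for H, the layer width, and the exact bias value are illustrative choices, not taken from the paper's released code; the negative gate-bias initialization follows the paper's recommendation to bias layers toward carrying the input early in training.

```python
import torch
import torch.nn as nn

class HighwayLayer(nn.Module):
    """Sketch of one highway layer: y = H(x) * T(x) + x * (1 - T(x))."""

    def __init__(self, dim: int, gate_bias: float = -2.0):
        super().__init__()
        self.transform = nn.Linear(dim, dim)   # H(x, W_H): candidate transformation
        self.gate = nn.Linear(dim, dim)        # T(x, W_T): transform gate
        # A negative initial gate bias makes T(x) small at the start of training,
        # so the layer initially behaves close to an identity mapping.
        nn.init.constant_(self.gate.bias, gate_bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = torch.relu(self.transform(x))      # nonlinear transform (ReLU assumed here)
        t = torch.sigmoid(self.gate(x))        # gate values in (0, 1)
        return h * t + x * (1.0 - t)           # gated mix of transform and carry paths

# Stacking many such layers: gradients can flow through the carry path,
# which is what allows direct gradient-descent training at large depth.
model = nn.Sequential(*[HighwayLayer(64) for _ in range(50)])
out = model(torch.randn(8, 64))
```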
Original language: English (US)
Title of host publication: Advances in Neural Information Processing Systems
Publisher: Neural Information Processing Systems Foundation
Pages: 2377-2385
Number of pages: 9
State: Published - Jan 1 2015
Externally published: Yes
