Highway and residual networks learn unrolled iterative estimation

Klaus Greff, Rupesh K. Srivastava, Jürgen Schmidhuber

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

62 Scopus citations

Abstract

The past year saw the introduction of new architectures such as Highway networks (Srivastava et al., 2015a) and Residual networks (He et al., 2015) which, for the first time, enabled the training of feedforward networks with dozens to hundreds of layers using simple gradient descent. While depth of representation has been posited as a primary reason for their success, there are indications that these architectures defy a popular view of deep learning as a hierarchical computation of increasingly abstract features at each layer. In this report, we argue that this view is incomplete and does not adequately explain several recent findings. We propose an alternative viewpoint based on unrolled iterative estimation: a group of successive layers iteratively refine their estimates of the same features instead of computing an entirely new representation. We demonstrate that this viewpoint directly leads to the construction of Highway and Residual networks. Finally, we provide preliminary experiments that examine the similarities and differences between the two architectures.
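The iterative-estimation view is easiest to see in the update equations of the two architectures. The sketch below is not from the paper; it is a minimal NumPy illustration, with hypothetical weights and a random input, of how a residual block (y = x + F(x)) and a highway block (y = T(x)·H(x) + (1 − T(x))·x) each apply a gated or additive correction to their input rather than computing a wholly new representation, so stacking blocks of identical width behaves like an unrolled refinement loop.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # feature dimension (hypothetical); kept constant so layers can refine the same estimate


def residual_block(x, W):
    """Residual update y = x + F(x): the layer outputs a correction to x."""
    return x + np.tanh(W @ x)


def highway_block(x, W_h, W_t):
    """Highway update y = T(x)*H(x) + (1 - T(x))*x with a learned transform gate T."""
    h = np.tanh(W_h @ x)                      # candidate estimate H(x)
    t = 1.0 / (1.0 + np.exp(-(W_t @ x)))      # gate T(x) in (0, 1) mixes old and new
    return t * h + (1.0 - t) * x


# Stacking blocks of identical width "unrolls" an iterative estimation procedure:
# each layer nudges the same d-dimensional estimate instead of building a new one.
x = rng.standard_normal(d)
res_est, hwy_est = x, x
for _ in range(5):
    W = 0.1 * rng.standard_normal((d, d))
    W_h = 0.1 * rng.standard_normal((d, d))
    W_t = 0.1 * rng.standard_normal((d, d))
    res_est = residual_block(res_est, W)
    hwy_est = highway_block(hwy_est, W_h, W_t)

print("residual estimate drift from input:", np.linalg.norm(res_est - x))
print("highway estimate drift from input: ", np.linalg.norm(hwy_est - x))
```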
Original language: English (US)
Title of host publication: 5th International Conference on Learning Representations, ICLR 2017 - Conference Track Proceedings
Publisher: International Conference on Learning Representations, ICLR
State: Published - Jan 1 2017
Externally published: Yes
