Unsupervised learning in LSTM recurrent neural networks

Magdalena Klapper-Rybicka, Nicol N. Schraudolph, Jürgen Schmidhuber

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

21 Scopus citations

Abstract

While much work has been done on unsupervised learning in feedforward neural network architectures, its potential with (theoretically more powerful) recurrent networks and time-varying inputs has rarely been explored. Here we train Long Short-Term Memory (LSTM) recurrent networks to maximize two information-theoretic objectives for unsupervised learning: Binary Information Gain Optimization (BINGO) and Nonparametric Entropy Optimization (NEO). LSTM learns to discriminate different types of temporal sequences and group them according to a variety of features.
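The NEO objective mentioned in the abstract is built on a nonparametric (Parzen-window) entropy estimate of the network's outputs. As an illustration only, here is a minimal sketch of such a leave-one-out Parzen entropy estimator for scalar outputs; the kernel width `sigma` and the exact estimator form are assumptions, not taken from the paper itself:

```python
import numpy as np

def parzen_entropy(y, sigma=0.1):
    """Leave-one-out Parzen-window entropy estimate for 1-D samples.

    Assumed form: H ≈ -(1/N) * sum_i log p(y_i), where p(y_i) is a
    Gaussian-kernel density estimate built from all other samples.
    """
    y = np.asarray(y, dtype=float)
    n = len(y)
    # Pairwise squared distances between samples.
    d2 = (y[:, None] - y[None, :]) ** 2
    # Gaussian kernel responses.
    k = np.exp(-d2 / (2.0 * sigma**2)) / (sigma * np.sqrt(2.0 * np.pi))
    np.fill_diagonal(k, 0.0)           # leave each sample out of its own estimate
    p = k.sum(axis=1) / (n - 1)        # density estimate at each sample
    return -np.log(p).mean()           # entropy estimate
```

In an unsupervised setting such as the one described, the gradient of this estimate with respect to the network outputs would be propagated back through the LSTM; spreading the outputs apart raises the estimated entropy, while clustering them lowers it.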
Original language: English (US)
Title of host publication: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Publisher: Springer Verlag
Pages: 684-691
Number of pages: 8
ISBN (Print): 3540424865
DOIs
State: Published - Jan 1 2001
Externally published: Yes

