Learning context sensitive languages with LSTM trained with Kalman filters

Felix A. Gers, Juan Antonio Pérez-Ortiz, Douglas Eck, Jürgen Schmidhuber

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution


Abstract

Unlike traditional recurrent neural networks, the Long Short-Term Memory (LSTM) model generalizes well when presented with training sequences derived from regular and also simple nonregular languages. Our novel combination of LSTM and the decoupled extended Kalman filter, however, learns even faster and generalizes even better, requiring only the 10 shortest exemplars (n ≤ 10) of the context-sensitive language a^n b^n c^n to deal correctly with values of n up to 1000 and more. Even when we take the relatively high update complexity per timestep into account, in many cases the hybrid offers faster learning than LSTM by itself. © Springer-Verlag Berlin Heidelberg 2002.
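
The abstract names the decoupled extended Kalman filter (DEKF) as the weight-update procedure combined with LSTM. As a rough illustration of that recursion (a generic sketch, not the paper's implementation), the NumPy code below applies one DEKF step to a set of disjoint weight groups. The function name dekf_update, the learning-rate and process-noise parameters, and the assumption that the Jacobians H_i of the network outputs with respect to each weight group are already available (in the paper they would come from LSTM's truncated derivatives) are all illustrative assumptions.

import numpy as np

def dekf_update(weights, covariances, jacobians, error, eta=1.0, q=1e-3):
    # One step of the decoupled extended Kalman filter (DEKF) over
    # disjoint weight groups.  Shapes (illustrative):
    #   weights[i]     : (n_i,)        parameter vector of group i
    #   covariances[i] : (n_i, n_i)    approximate error covariance P_i
    #   jacobians[i]   : (n_i, n_out)  d(output)/d(w_i) at this time step
    #   error          : (n_out,)      target minus network output
    n_out = error.shape[0]
    # Global matrix A(t): the groups interact only through the outputs.
    A = np.eye(n_out) / eta
    for P, H in zip(covariances, jacobians):
        A += H.T @ P @ H
    A_inv = np.linalg.inv(A)
    new_weights, new_covariances = [], []
    for w, P, H in zip(weights, covariances, jacobians):
        K = P @ H @ A_inv                                  # Kalman gain for group i
        new_weights.append(w + K @ error)                  # weight update
        new_covariances.append(P - K @ H.T @ P + q * np.eye(w.size))
    return new_weights, new_covariances

# Toy usage with random Jacobians, only to show the expected shapes.
rng = np.random.default_rng(0)
groups = [5, 3]          # two weight groups, e.g. one per LSTM memory block
n_out = 4                # e.g. one output unit per symbol of a^n b^n c^n
w = [rng.normal(size=n) for n in groups]
P = [np.eye(n) for n in groups]
H = [rng.normal(size=(n, n_out)) for n in groups]
err = rng.normal(size=n_out)
w, P = dekf_update(w, P, H, err)

Maintaining and updating the per-group covariance matrices P_i dominates the cost of each step, which is the "relatively high update complexity per timestep" the abstract weighs against the faster convergence of the hybrid.
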
Original language: English (US)
Title of host publication: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Publisher: Springer Verlag
Pages: 655-660
Number of pages: 6
ISBN (Print): 9783540440741
State: Published - Jan 1 2002
Externally published: Yes

