Abstract
We apply Long Short-Term Memory (LSTM) recurrent neural networks to a large corpus of unprompted speech: the German part of the VERBMOBIL corpus. By training first on one fraction of the data and then retraining on another fraction, we both reduce training time and significantly improve recognition rates. For comparison we report recognition rates of Hidden Markov Models (HMMs) on the same corpus, and provide a promising extrapolation for HMM-LSTM hybrids. © Springer-Verlag Berlin Heidelberg 2005.
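The abstract's fraction-wise retraining scheme can be illustrated with a short sketch. The following is a minimal, illustrative example assuming a PyTorch LSTM classifier and synthetic data in place of the VERBMOBIL corpus; the model, hyperparameters, and data splits are hypothetical and not the authors' actual setup.

```python
# Minimal sketch of train-on-one-fraction, retrain-on-another (assumptions:
# PyTorch, a toy LSTM classifier, synthetic MFCC-like features standing in
# for the VERBMOBIL data, which is not reproduced here).
import torch
import torch.nn as nn

class LSTMClassifier(nn.Module):
    def __init__(self, n_features=39, n_hidden=64, n_classes=50):
        super().__init__()
        self.lstm = nn.LSTM(n_features, n_hidden, batch_first=True)
        self.out = nn.Linear(n_hidden, n_classes)

    def forward(self, x):
        _, (h, _) = self.lstm(x)   # final hidden state summarizes the utterance
        return self.out(h[-1])

def train(model, frames, labels, epochs=5, lr=1e-3):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(frames), labels)
        loss.backward()
        opt.step()

# Synthetic stand-in: 200 utterances, 100 frames each, 39 features per frame.
X = torch.randn(200, 100, 39)
y = torch.randint(0, 50, (200,))

model = LSTMClassifier()
train(model, X[:100], y[:100])   # stage 1: train on the first fraction
train(model, X[100:], y[100:])   # stage 2: retrain the same net on another fraction
```

The key point of the scheme is that the second stage continues from the weights learned in the first, rather than training a fresh network on the full corpus.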
| Original language | English (US) |
| --- | --- |
| Title of host publication | Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) |
| Pages | 575-581 |
| Number of pages | 7 |
| DOIs | |
| State | Published - Dec 1 2005 |
| Externally published | Yes |
ASJC Scopus subject areas
- Theoretical Computer Science
- General Computer Science