Classifying unprompted speech by retraining LSTM nets

Nicole Beringer, Alex Graves, Florian Schiel, Jürgen Schmidhuber

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

We apply Long Short-Term Memory (LSTM) recurrent neural networks to a large corpus of unprompted speech: the German part of the VERBMOBIL corpus. By training first on a fraction of the data, then retraining on another fraction, we both reduce time costs and significantly improve recognition rates. For comparison we show recognition rates of Hidden Markov Models (HMMs) on the same corpus, and provide a promising extrapolation for HMM-LSTM hybrids. © Springer-Verlag Berlin Heidelberg 2005.
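
As a rough illustration of the retraining scheme the abstract describes, the sketch below trains a small LSTM classifier on one fraction of a dataset and then continues training the same weights on a second fraction. This is a minimal PyTorch sketch, not the authors' implementation: the synthetic tensors, dimensions, and hyperparameters are illustrative assumptions standing in for the VERBMOBIL acoustic features and labels.

    # Minimal sketch of the fraction-then-retrain scheme (assumed setup, not
    # the paper's code). Synthetic tensors stand in for VERBMOBIL features.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    N, T, F, C = 2000, 50, 13, 40       # utterances, frames, feature dim, classes (all assumed)
    X = torch.randn(N, T, F)            # stand-in acoustic features (e.g. MFCC-like)
    y = torch.randint(0, C, (N,))       # stand-in class labels

    class LSTMClassifier(nn.Module):
        def __init__(self, feat_dim, hidden, classes):
            super().__init__()
            self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
            self.out = nn.Linear(hidden, classes)

        def forward(self, x):
            h, _ = self.lstm(x)         # h: (batch, time, hidden)
            return self.out(h[:, -1])   # classify from the final frame's state

    def train(model, data, labels, epochs, lr):
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        loss_fn = nn.CrossEntropyLoss()
        for _ in range(epochs):
            opt.zero_grad()
            loss = loss_fn(model(data), labels)
            loss.backward()
            opt.step()

    model = LSTMClassifier(F, hidden=100, classes=C)

    # Phase 1: train on a first fraction of the corpus.
    train(model, X[:500], y[:500], epochs=5, lr=1e-3)

    # Phase 2: retrain the same weights on a different fraction; the paper
    # reports this both reduces time costs and improves recognition rates.
    train(model, X[500:1500], y[500:1500], epochs=5, lr=1e-3)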
Original language: English (US)
Title of host publication: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
Pages: 575-581
Number of pages: 7
DOIs
State: Published - Dec 1 2005
Externally published: Yes

ASJC Scopus subject areas

  • Theoretical Computer Science
  • General Computer Science
