We present a lipreading system, i.e., a speech recognition system using only visual features, which employs domain-adversarial training to achieve speaker independence. Domain-adversarial training is integrated into the optimization of a lipreader based on a stack of feedforward and LSTM (Long Short-Term Memory) recurrent neural networks, yielding an end-to-end trainable system that requires only a very small number of frames of untranscribed target data to substantially improve recognition accuracy on the target speaker. On pairs of different source and target speakers, we achieve a relative accuracy improvement of around 40% with only 15 to 20 seconds of untranscribed target speech data. In multi-speaker training setups, the accuracy improvements are smaller but still substantial.
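Domain-adversarial training is typically implemented with a gradient reversal layer (Ganin & Lempitsky, 2015): the layer is the identity in the forward pass, but in the backward pass it flips and scales the gradient flowing from a domain (here, speaker) classifier, so the shared feature extractor learns speaker-invariant features. The sketch below illustrates that mechanism only; the class and parameter names are hypothetical and not taken from the paper.

```python
import numpy as np

class GradientReversal:
    """Core building block of domain-adversarial training (a sketch).

    Forward pass: identity — features flow unchanged to the
    speaker/domain classifier.
    Backward pass: the gradient from the domain classifier is
    negated and scaled by `lam`, so the upstream feature extractor
    is pushed to *confuse* the domain classifier while the main
    (lipreading) loss is optimized normally.
    """

    def __init__(self, lam=1.0):
        self.lam = lam  # adversarial trade-off weight (hypothetical name)

    def forward(self, x):
        # Identity: pass features through unchanged.
        return x

    def backward(self, grad_output):
        # Reverse and scale the domain-classifier gradient.
        return -self.lam * grad_output


# Minimal demonstration of the reversal behaviour.
grl = GradientReversal(lam=0.5)
features = np.array([1.0, 2.0, 3.0])
grad_from_domain_clf = np.array([0.2, -0.4, 0.6])

out = grl.forward(features)            # unchanged features
grad_back = grl.backward(grad_from_domain_clf)  # flipped, scaled gradient
```

In the full system, this layer would sit between the shared feature stack and the speaker classifier, while the recognition branch receives ordinary gradients — that opposition is what yields speaker-invariant features from a handful of untranscribed target frames.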
|Original language||English (US)|
|Title of host publication||Proceedings of the Annual Conference of the International Speech Communication Association, INTERSPEECH|
|Publisher||International Speech Communication Association, 4 Rue des Fauvettes - Lous Tourils, Baixas 66390|
|Number of pages||5|
|State||Published - Jan 1 2017|