
Exploring Neural Transducers for End-to-End Speech Recognition

About

In this work, we perform an empirical comparison of CTC, RNN-Transducer, and attention-based Seq2Seq models for end-to-end speech recognition. We show that, without any external language model, both Seq2Seq and RNN-Transducer models outperform the best reported CTC models that use a language model on the popular Hub5'00 benchmark. On our internal diverse dataset, these trends continue: RNN-Transducer models rescored with a language model after beam search outperform our best CTC models. These results simplify the speech recognition pipeline, so that decoding can now be expressed purely as neural network operations. We also study how the choice of encoder architecture affects the performance of the three models: when all encoder layers are forward-only, and when encoders downsample the input representation aggressively.
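To illustrate the kind of language-model-free decoding the abstract refers to, here is a minimal sketch of CTC greedy decoding: collapse consecutive repeated labels, then drop blanks. The blank index and the example frame sequence are illustrative assumptions, not taken from the paper.

```python
BLANK = 0  # conventional CTC blank index (an assumption, not from the paper)

def ctc_greedy_decode(frame_labels):
    """Collapse consecutive repeats, then remove blank symbols."""
    decoded = []
    prev = None
    for label in frame_labels:
        if label != prev:          # collapse repeated frames
            if label != BLANK:     # drop blanks
                decoded.append(label)
        prev = label
    return decoded

# Per-frame argmax over a hypothetical utterance:
print(ctc_greedy_decode([0, 3, 3, 0, 0, 5, 5, 5, 0, 3]))  # [3, 5, 3]
```

This is the simplest case; the beam-search-plus-rescoring setup the paper evaluates replaces the per-frame argmax with a search over label sequences, but the collapse-and-remove-blanks rule is the same.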

Eric Battenberg, Jitong Chen, Rewon Child, Adam Coates, Yashesh Gaur, Yi Li, Hairong Liu, Sanjeev Satheesh, David Seetapun, Anuroop Sriram, Zhenyao Zhu • 2017

Related benchmarks

Task | Dataset | Metric | Result | Rank
Speech Recognition | Hub5'00 | SWB Score | 8.1 | 25
Automatic Speech Recognition | Hub5 2000 (SWB) | WER | 8.1 | 21
Automatic Speech Recognition | Eval2000-CH Fisher-Switchboard 2300-h (test) | -- | -- | 10
Automatic Speech Recognition | Eval2000 Fisher-Switchboard 2300-h (test) | WER | 12.8 | 9
