
Improving End-to-End Speech Recognition with Policy Learning

About

Connectionist temporal classification (CTC) is widely used for maximum likelihood learning in end-to-end speech recognition models. However, there is usually a disparity between the negative log-likelihood and the performance metric used in speech recognition, e.g., word error rate (WER), resulting in a mismatch between the objective optimized during training and the metric used for evaluation. We show that this problem can be mitigated by jointly training with maximum likelihood and policy gradient. In particular, policy learning lets us directly optimize the (otherwise non-differentiable) performance metric. Joint training improves relative performance by 4% to 13% for our end-to-end model compared to the same model trained with maximum likelihood alone. The model achieves 5.53% WER on the Wall Street Journal dataset, and 5.42% and 14.70% WER on the LibriSpeech test-clean and test-other sets, respectively.
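The joint objective described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes a REINFORCE-style policy-gradient term whose reward is the negative WER of sampled transcriptions, combined with the CTC loss via a mixing weight (here called `lam`, a hypothetical name). The CTC loss itself is taken as a precomputed scalar input.

```python
import numpy as np

def word_error_rate(ref, hyp):
    # Word-level Levenshtein distance divided by reference length.
    r, h = ref.split(), hyp.split()
    d = np.zeros((len(r) + 1, len(h) + 1), dtype=int)
    d[:, 0] = np.arange(len(r) + 1)
    d[0, :] = np.arange(len(h) + 1)
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i, j] = min(d[i - 1, j] + 1,      # deletion
                          d[i, j - 1] + 1,      # insertion
                          d[i - 1, j - 1] + cost)  # substitution
    return d[len(r), len(h)] / max(len(r), 1)

def joint_loss(ctc_loss, sampled_hyps, ref, log_probs, lam=0.9):
    # Joint objective: lam * CTC loss + (1 - lam) * policy-gradient term.
    # REINFORCE surrogate: reward is -WER of each sampled transcription,
    # and the loss is -E[(reward - baseline) * log p(sample)].
    rewards = np.array([-word_error_rate(ref, h) for h in sampled_hyps])
    baseline = rewards.mean()  # mean baseline for variance reduction
    pg_loss = -np.mean((rewards - baseline) * np.array(log_probs))
    return lam * ctc_loss + (1 - lam) * pg_loss
```

Because the reward enters only as a scalar weight on the sample log-probabilities, the metric itself never needs to be differentiable, which is the point of the policy-gradient term.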

Yingbo Zhou, Caiming Xiong, Richard Socher • 2017

Related benchmarks

Task                          | Dataset                        | Metric | Result | Rank
------------------------------|--------------------------------|--------|--------|-----
Automatic Speech Recognition  | LibriSpeech (test-other)       | WER    | 14.7   | 966
Automatic Speech Recognition  | LibriSpeech clean (test)       | WER    | 5.42   | 833
Automatic Speech Recognition  | LibriSpeech (dev-other)        | WER    | 14.26  | 411
Automatic Speech Recognition  | LibriSpeech (dev-clean)        | WER (%)| 5.1    | 319
Speech Recognition            | WSJ (92-eval)                  | WER    | 4.67   | 131
Automatic Speech Recognition  | 80-hour WSJ (dev93)            | WER    | 9.21   | 16
Speech Recognition            | LibriSpeech clean 1000h (test) | WER    | 0.0542 | 9
