
ContextNet: Improving Convolutional Neural Networks for Automatic Speech Recognition with Global Context

About

Convolutional neural networks (CNNs) have shown promising results for end-to-end speech recognition, albeit still behind other state-of-the-art methods in performance. In this paper, we study how to bridge this gap and go beyond with a novel CNN-RNN-transducer architecture, which we call ContextNet. ContextNet features a fully convolutional encoder that incorporates global context information into convolution layers by adding squeeze-and-excitation modules. In addition, we propose a simple scaling method that scales the widths of ContextNet, achieving a good trade-off between computation and accuracy. We demonstrate that on the widely used LibriSpeech benchmark, ContextNet achieves a word error rate (WER) of 2.1%/4.6% without an external language model (LM), 1.9%/4.1% with an LM, and 2.9%/7.0% with only 10M parameters on the clean/noisy LibriSpeech test sets. This compares to the previous best published system, which achieves 2.0%/4.6% with an LM and 3.9%/11.3% with 20M parameters. The superiority of the proposed ContextNet model is also verified on a much larger internal dataset.
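The two ideas in the abstract — squeeze-and-excitation gating for global context and width scaling — can be sketched as below. This is a minimal NumPy illustration, not the paper's exact configuration: the weight shapes (`w1`, `w2`), bottleneck size, and the helper names `squeeze_and_excitation` and `scale_widths` are assumptions made for the example.

```python
import numpy as np

def squeeze_and_excitation(x, w1, w2):
    """Global-context gating in the spirit of ContextNet's SE module.

    x:  (channels, time) feature map from a convolution layer.
    w1: (bottleneck, channels) reduction weights (illustrative shapes).
    w2: (channels, bottleneck) expansion weights.
    """
    # Squeeze: average over the whole time axis -> one global context vector.
    context = x.mean(axis=1)                       # (channels,)
    # Excitation: bottleneck MLP with ReLU, then a sigmoid gate per channel.
    hidden = np.maximum(w1 @ context, 0.0)         # (bottleneck,)
    gate = 1.0 / (1.0 + np.exp(-(w2 @ hidden)))    # (channels,), in (0, 1)
    # Re-weight every time step of each channel by its global gate.
    return x * gate[:, None]

def scale_widths(base_channels, alpha):
    """Hypothetical width-scaling helper: multiply each layer's channel
    count by a single global multiplier alpha, as in the paper's
    computation/accuracy trade-off."""
    return [max(1, int(round(c * alpha))) for c in base_channels]

# Toy usage with random weights.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 10))         # 4 channels, 10 time steps
w1 = rng.standard_normal((2, 4)) * 0.1   # bottleneck of 2 (illustrative)
w2 = rng.standard_normal((4, 2)) * 0.1
out = squeeze_and_excitation(x, w1, w2)
channels = scale_widths([256, 256, 512], 0.5)
```

Because the sigmoid gate lies strictly in (0, 1), the SE module only attenuates channels; its key property for speech is that the gate depends on the entire utterance, injecting global context into otherwise local convolutions.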

Wei Han, Zhengdong Zhang, Yu Zhang, Jiahui Yu, Chung-Cheng Chiu, James Qin, Anmol Gulati, Ruoming Pang, Yonghui Wu • 2020

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Automatic Speech Recognition | LibriSpeech (test-other) | WER | 4.1 | 966 |
| Automatic Speech Recognition | LibriSpeech clean (test) | WER | 1.9 | 833 |
| Automatic Speech Recognition | LibriSpeech (dev-other) | WER | 3.9 | 411 |
| Automatic Speech Recognition | LibriSpeech (dev-clean) | WER (%) | 1.9 | 319 |
| Automatic Speech Recognition | LibriSpeech 960h (test-other) | WER | 4.1 | 81 |
| Speech Recognition | LibriSpeech clean (dev) | WER | 0.02 | 59 |
| Speech Recognition | LibriSpeech (test) | -- | -- | 59 |
| Automatic Speech Recognition | LibriSpeech 960h (test-clean) | WER | 0.019 | 53 |
| Speech Recognition | YouTube (test) | WER | 8.2 | 10 |
