
Conformer: Convolution-augmented Transformer for Speech Recognition

About

Recently, Transformer and Convolution neural network (CNN) based models have shown promising results in Automatic Speech Recognition (ASR), outperforming Recurrent neural networks (RNNs). Transformer models are good at capturing content-based global interactions, while CNNs exploit local features effectively. In this work, we achieve the best of both worlds by studying how to combine convolution neural networks and transformers to model both local and global dependencies of an audio sequence in a parameter-efficient way. To this end, we propose the convolution-augmented transformer for speech recognition, named Conformer. Conformer significantly outperforms the previous Transformer and CNN based models, achieving state-of-the-art accuracies. On the widely used LibriSpeech benchmark, our model achieves WER of 2.1%/4.3% without using a language model and 1.9%/3.9% with an external language model on test/test-other. We also observe competitive performance of 2.7%/6.3% with a small model of only 10M parameters.
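The abstract describes sandwiching a convolution module between self-attention and feed-forward layers. Below is a minimal, hedged NumPy sketch of one Conformer block's Macaron-style layout (half-step FFN, self-attention, depthwise convolution, half-step FFN, final layer norm). It is illustrative only: the real model uses multi-head attention with relative positional encoding, pointwise convolutions with GLU gating, batch norm, and dropout, all of which are omitted here; every function and parameter name is invented for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer_norm(x, eps=1e-5):
    # normalize each time step over the feature dimension
    mu = x.mean(-1, keepdims=True)
    var = x.var(-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def ffn(x, w1, w2):
    # position-wise feed-forward with swish activation, as in the paper
    h = x @ w1
    h = h * (1.0 / (1.0 + np.exp(-h)))  # swish(x) = x * sigmoid(x)
    return h @ w2

def self_attention(x, wq, wk, wv):
    # single-head scaled dot-product self-attention (global interactions)
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores -= scores.max(-1, keepdims=True)  # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(-1, keepdims=True)
    return attn @ v

def depthwise_conv(x, kernel):
    # per-channel 1-D convolution over time, "same" padding (local features)
    t, _ = x.shape
    k = kernel.shape[0]
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (0, 0)))
    out = np.zeros_like(x)
    for i in range(t):
        out[i] = np.einsum("kd,kd->d", xp[i:i + k], kernel)
    return out

def conformer_block(x, p):
    # Macaron structure: the two FFNs contribute half-step residuals
    x = x + 0.5 * ffn(layer_norm(x), p["ff1_w1"], p["ff1_w2"])
    x = x + self_attention(layer_norm(x), p["wq"], p["wk"], p["wv"])
    x = x + depthwise_conv(layer_norm(x), p["conv_k"])
    x = x + 0.5 * ffn(layer_norm(x), p["ff2_w1"], p["ff2_w2"])
    return layer_norm(x)

d, t, ff = 16, 8, 32  # toy sizes, not the paper's dimensions
p = {
    "ff1_w1": rng.normal(0, 0.1, (d, ff)), "ff1_w2": rng.normal(0, 0.1, (ff, d)),
    "ff2_w1": rng.normal(0, 0.1, (d, ff)), "ff2_w2": rng.normal(0, 0.1, (ff, d)),
    "wq": rng.normal(0, 0.1, (d, d)), "wk": rng.normal(0, 0.1, (d, d)),
    "wv": rng.normal(0, 0.1, (d, d)), "conv_k": rng.normal(0, 0.1, (15, d)),
}
x = rng.normal(size=(t, d))
y = conformer_block(x, p)
print(y.shape)  # same (time, features) shape as the input
```

The key design point the sketch preserves is ordering: attention handles content-based global dependencies first, then the depthwise convolution refines local patterns, with residual connections throughout so each module learns an increment over its input.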

Anmol Gulati, James Qin, Chung-Cheng Chiu, Niki Parmar, Yu Zhang, Jiahui Yu, Wei Han, Shibo Wang, Zhengdong Zhang, Yonghui Wu, Ruoming Pang · 2020

Related benchmarks

Task                          Dataset                         Metric    Result   Rank
Automatic Speech Recognition  LibriSpeech clean (test)        WER       1.84     1156
Automatic Speech Recognition  LibriSpeech (test-other)        WER       1.9      1151
Automatic Speech Recognition  LibriSpeech (dev-other)         WER       3.9      462
Automatic Speech Recognition  LibriSpeech (dev-clean)         WER (%)   2.1      340
Automatic Speech Recognition  LibriSpeech 960h (test-other)   WER       3.9      88
Automatic Speech Recognition  Librispeech (test-clean)        WER       2.1      84
Speech Recognition            LibriSpeech clean (dev)         WER       0.019    80
Automatic Speech Recognition  WenetSpeech Meeting (test)      CER       15.21    78
Automated Speech Recognition  TED-LIUM V3                     WER       24.03    77
Speech Recognition            LibriSpeech (test)              WER       0.021    76

(Showing 10 of 50 rows.)
