Squeezeformer: An Efficient Transformer for Automatic Speech Recognition
About
The recently proposed Conformer model has become the de facto backbone for various downstream speech tasks thanks to its hybrid attention-convolution architecture, which captures both local and global features. However, through a series of systematic studies, we find that the Conformer architecture's design choices are not optimal. After re-examining the design choices for both the macro- and micro-architecture of Conformer, we propose Squeezeformer, which consistently outperforms state-of-the-art ASR models under the same training schemes. In particular, for the macro-architecture, Squeezeformer incorporates (i) the Temporal U-Net structure, which reduces the cost of the multi-head attention modules on long sequences, and (ii) a simpler block structure of a multi-head attention or convolution module followed by a feed-forward module, instead of the Macaron structure proposed in Conformer. Furthermore, for the micro-architecture, Squeezeformer (i) simplifies the activations in the convolutional block, (ii) removes redundant Layer Normalization operations, and (iii) incorporates an efficient depthwise down-sampling layer to sub-sample the input signal. Squeezeformer achieves state-of-the-art results of 7.5%, 6.5%, and 6.0% word-error-rate (WER) on LibriSpeech test-other without external language models, which are 3.1%, 1.4%, and 0.6% better than Conformer-CTC with the same number of FLOPs. Our code is open-sourced and available online.
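The savings from the Temporal U-Net structure can be illustrated with a back-of-the-envelope FLOP count. This is a rough sketch, not code from the paper: the `mha_flops` helper, the frame rates, and the model width below are illustrative assumptions. The point is that the attention score/value matmuls scale quadratically with sequence length, so halving the temporal resolution in the middle of the network cuts that term by roughly 4x per attention module.

```python
def mha_flops(seq_len: int, d_model: int) -> int:
    """Rough multiply-add count for one multi-head attention module.

    Illustrative only: counts the Q/K/V/output projections
    (4 * T * d^2) plus the attention score and weighted-sum
    matmuls (2 * T^2 * d), ignoring softmax and constant factors.
    """
    return 4 * seq_len * d_model ** 2 + 2 * seq_len ** 2 * d_model

# Hypothetical numbers: a 30 s utterance at a 40 ms frame rate
# (Conformer-style 4x sub-sampling of 10 ms features) gives T = 750.
full = mha_flops(750, 512)

# The Temporal U-Net halves the rate for the middle blocks
# (80 ms per frame), so those attention modules see T = 375.
squeezed = mha_flops(375, 512)

# The quadratic term shrinks 4x, so the overall per-module cost
# drops by more than half at this width.
print(f"cost ratio: {full / squeezed:.2f}x")
```

With shorter sequences the linear projection term dominates and the ratio approaches 2x; with longer sequences it approaches 4x, which is why the squeeze helps most on long inputs.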
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Automatic Speech Recognition | LibriSpeech (test-other) | WER | 5.97 | 966 |
| Automatic Speech Recognition | LibriSpeech clean (test) | WER | 2.47 | 833 |
| Automatic Speech Recognition | LibriSpeech (dev-other) | WER | 5.77 | 411 |
| Automatic Speech Recognition | LibriSpeech (dev-clean) | WER | 2.27 | 319 |
| Automatic Speech Recognition | LibriSpeech (test-clean) | WER | 2.47 | 84 |
| Long-form Transcription | Earnings-21 | WER | 38.09 | 26 |
| Automatic Speech Recognition | TED-LIUM V3 | WER | 23.5 | 26 |
| Automatic Speech Recognition | Earnings-22 | WER | 53.44 | 25 |
| Automatic Speech Recognition | Telephony | WER | 11.72 | 7 |
| Automatic Speech Recognition | Reading | WER | 5.2 | 7 |