
Fast Conformer with Linearly Scalable Attention for Efficient Speech Recognition

About

Conformer-based models have become the dominant end-to-end architecture for speech processing tasks. With the objective of enhancing the Conformer architecture for efficient training and inference, we carefully redesigned Conformer with a novel downsampling schema. The proposed model, named Fast Conformer (FC), is 2.8x faster than the original Conformer, scales to a billion parameters without any changes to the core architecture, and achieves state-of-the-art accuracy on Automatic Speech Recognition benchmarks. To enable transcription of long-form speech up to 11 hours, we replaced global attention with limited-context attention post-training, and further improved accuracy by fine-tuning with the addition of a global token. Fast Conformer, when combined with a Transformer decoder, also outperforms the original Conformer in both accuracy and speed on Speech Translation and Spoken Language Understanding.
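The long-form recipe in the abstract amounts to swapping full self-attention for a banded (local) attention pattern plus a global token that every position can attend to and be attended by. Below is a minimal NumPy sketch of how such an attention mask can be constructed; the function name, window sizes, and single-global-token layout are illustrative assumptions, not the paper's exact configuration.

```python
import numpy as np

def limited_context_mask(seq_len: int, left: int, right: int,
                         num_global: int = 1) -> np.ndarray:
    """Boolean attention mask: True = "may attend".

    Each of the `seq_len` frame positions attends only to frames within
    [i - left, i + right]; the first `num_global` positions are global
    tokens with full rows and columns. Window sizes here are illustrative,
    not the settings used in the paper.
    """
    n = num_global + seq_len
    mask = np.zeros((n, n), dtype=bool)
    # Banded local attention over the frame positions.
    for i in range(seq_len):
        lo = max(0, i - left)
        hi = min(seq_len, i + right + 1)
        mask[num_global + i, num_global + lo:num_global + hi] = True
    # Global tokens attend everywhere and are attended by everyone.
    mask[:num_global, :] = True
    mask[:, :num_global] = True
    return mask

# Example: 10 frames, a (2, 2) local window, one global token.
print(limited_context_mask(10, left=2, right=2).astype(int))
```

Because the band has fixed width, the mask has O(seq_len * (left + right)) attended entries rather than O(seq_len^2), which is what makes the attention cost scale linearly with audio length.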

Dima Rekesh, Nithin Rao Koluguri, Samuel Kriman, Somshubra Majumdar, Vahid Noroozi, He Huang, Oleksii Hrinchuk, Krishna Puvvada, Ankur Kumar, Jagadeesh Balam, Boris Ginsburg • 2023

Related benchmarks

Task | Dataset | Metric | Result | Rank
Automatic Speech Recognition | LibriSpeech (test-other) | WER | 2.47 | 966
Automatic Speech Recognition | LibriSpeech clean (test) | WER | 1.38 | 833
Speech Recognition | VoxPopuli (test) | WER | 5.39 | 37
Speech Recognition | WSJ nov92 (test) | WER | 1.42 | 34
Automatic Speech Recognition | AMI | WER | 15.7 | 28
Long-form Transcription | Earnings-22 | WER | 13.69 | 27
Automatic Speech Recognition | VoxPopuli | WER | 6.6 | 27
Automated Speech Recognition | TED-LIUM V3 | WER | 3.54 | 26
Automatic Speech Recognition | Earnings-22 | WER | 13.8 | 25
Automatic Speech Recognition | AMI (test) | WER | 15.62 | 24

Showing 10 of 23 rows.

Other info

Code
