WhisperRT -- Turning Whisper into a Causal Streaming Model
About
Automatic Speech Recognition (ASR) has seen remarkable progress, with models like OpenAI Whisper and NVIDIA Canary achieving state-of-the-art (SOTA) performance in offline transcription. However, these models are not designed for streaming (online or real-time) transcription, due to limitations in their architecture and training methodology. We propose a method to turn a Transformer encoder-decoder model into a low-latency streaming model. The encoder is made causal so that it processes audio incrementally, while the decoder conditions on partial encoder states to generate tokens aligned with the available temporal context. This requires explicit synchronization between encoded input frames and token emissions. Since tokens are produced only after sufficient acoustic evidence has been observed, an inherent latency arises, necessitating fine-tuning of the encoder-decoder alignment mechanism. We propose an updated inference mechanism that utilizes the fine-tuned causal encoder and decoder to yield greedy and beam-search decoding, and is shown to be locally optimal. Experiments at low-latency chunk sizes (below 300 ms) show that our fine-tuned model outperforms existing non-fine-tuned streaming approaches in most cases, at lower computational complexity. We release our training and inference code, along with the fine-tuned models, to support further research and development in streaming ASR.
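The chunked inference loop described above can be sketched as follows. This is a minimal toy illustration, not the released WhisperRT API: the class and parameter names (`CausalStreamer`, `frames_per_token`) are hypothetical, real encoder states are replaced by raw frames, and the greedy decoder is stubbed out. It only demonstrates the synchronization idea: encode audio causally chunk by chunk, and emit a token only once enough acoustic evidence for it has accumulated.

```python
# Hypothetical sketch of causal streaming ASR inference.
# CausalStreamer and frames_per_token are illustrative names,
# not part of the actual WhisperRT codebase.
from dataclasses import dataclass, field


@dataclass
class CausalStreamer:
    """Toy streaming transcriber: encodes audio chunk by chunk and
    emits a token only after enough acoustic context has been seen."""
    frames_per_token: int = 3              # acoustic evidence required per token
    encoder_states: list = field(default_factory=list)
    emitted: int = 0                       # tokens emitted so far

    def encode_chunk(self, chunk):
        # Causal encoding: states for a chunk depend only on past audio,
        # so previously computed states never need to be revised.
        self.encoder_states.extend(chunk)

    def decode_available(self):
        # Emit tokens kept in sync with the encoder frames observed so far;
        # a real model would run greedy/beam search over partial states here.
        out = []
        while (self.emitted + 1) * self.frames_per_token <= len(self.encoder_states):
            out.append(f"tok{self.emitted}")   # stand-in for a decoded token
            self.emitted += 1
        return out


streamer = CausalStreamer()
tokens = []
for chunk in ([0.1, 0.2], [0.3, 0.4], [0.5, 0.6], [0.7]):
    streamer.encode_chunk(chunk)
    tokens.extend(streamer.decode_available())
# 7 frames at 3 frames per token -> 2 tokens; the trailing frame is the
# inherent latency buffer awaiting further acoustic evidence.
```

The loop makes the latency trade-off concrete: smaller chunks lower the delay between audio arrival and token emission, but each token must still wait for its full quota of encoder frames.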
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Automatic Speech Recognition | LibriSpeech clean (test) | WER | 4.9 | 1156 |
| Automatic Speech Recognition | LibriSpeech (test-other) | WER | 9.57 | 1151 |
| Automatic Speech Recognition | TED-LIUM 3 | WER | 7.22 | 45 |
| Automatic Speech Recognition | MLS FR (test) | WER | 14.23 | 13 |
| Automatic Speech Recognition | MLS Spanish | Relative WER | 10.62 | 3 |
| Automatic Speech Recognition | MLS German | Relative WER | 14.07 | 3 |
| Automatic Speech Recognition | MLS Portuguese | Relative WER | 18.26 | 3 |