Whisper-Flamingo: Integrating Visual Features into Whisper for Audio-Visual Speech Recognition and Translation
About
Audio-Visual Speech Recognition (AVSR) uses lip-based video to improve performance in noise. Since video is harder to obtain than audio, the video training data of AVSR models is usually limited to a few thousand hours. In contrast, speech models such as Whisper are trained on hundreds of thousands of hours of data and thus learn a better speech-to-text decoder. This huge difference in training data motivates us to adapt Whisper to handle video inputs.

Inspired by Flamingo, which injects visual features into language models, we propose Whisper-Flamingo, which integrates visual features into the Whisper speech recognition and translation model with gated cross attention. Our models achieve state-of-the-art ASR WER (0.68%) and AVSR WER (0.76%) on LRS3, and state-of-the-art ASR WER (1.3%) and AVSR WER (1.4%) on LRS2. Audio-visual Whisper-Flamingo outperforms audio-only Whisper on English speech recognition and En-X translation for 6 languages in noisy conditions. Moreover, Whisper-Flamingo is versatile and performs all of these tasks with a single set of parameters, whereas prior methods are trained separately on each language.
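The gated cross-attention mechanism described above can be sketched as follows. This is an illustrative PyTorch sketch in the Flamingo style, not Whisper-Flamingo's exact implementation: the audio/text hidden states attend over visual features, and tanh gates initialized to zero make each new block start as an identity mapping, so the pretrained Whisper weights are undisturbed at the beginning of training. All dimensions and names here are hypothetical.

```python
import torch
import torch.nn as nn

class GatedCrossAttention(nn.Module):
    """Flamingo-style gated cross-attention block (illustrative sketch)."""

    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        # Queries come from the audio/text stream; keys/values from video.
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.ffn = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )
        # Zero-initialized gates: tanh(0) = 0, so the block is initially
        # an identity mapping and the pretrained model is unchanged.
        self.attn_gate = nn.Parameter(torch.zeros(1))
        self.ffn_gate = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor, visual: torch.Tensor) -> torch.Tensor:
        attn_out, _ = self.attn(x, visual, visual)
        x = x + torch.tanh(self.attn_gate) * attn_out
        x = x + torch.tanh(self.ffn_gate) * self.ffn(x)
        return x

# Example: 2 clips, 50 audio frames, 25 video frames, 64-dim features.
layer = GatedCrossAttention(dim=64)
audio = torch.randn(2, 50, 64)
video = torch.randn(2, 25, 64)
out = layer(audio, video)
print(out.shape)  # torch.Size([2, 50, 64])
# With zero-initialized gates the block leaves the audio stream untouched:
print(torch.allclose(out, audio))  # True
```

As training proceeds, the gates learn nonzero values and the model gradually blends in the visual stream, which is why noisy-audio performance improves without destroying Whisper's pretrained decoder.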
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Visual Speech Recognition | LRS3 (test) | WER 1.5% | 159 |
| Audio-Visual Speech Recognition | LRS3 clean (test) | WER 0.76% | 70 |
| Speech Recognition | LRS2 (test) | WER 1.3% | 49 |
| Automatic Speech Recognition | LRS3 (test) | WER 2.3% | 46 |
| Audio-Visual Speech Recognition | LRS2 (test) | WER 1.4% | 34 |
| Audio-Visual Speech Recognition | LRS3 babble noise at 0 dB SNR (test) | WER 5.6% | 32 |
| English Transcription | LRS3 noisy at 0 dB SNR (test) | WER 5.6% | 25 |
| Automatic Speech Recognition | LRS3 clean original (test) | -- | 21 |
| Audio-Visual Speech Recognition | LRS3 (test) | WER 0.9% | 18 |
| Audio Speech Recognition | LRS3 | WER 0.7% | 14 |