Robust Self-Supervised Audio-Visual Speech Recognition

About

Audio-based automatic speech recognition (ASR) degrades significantly in noisy environments and is particularly vulnerable to interfering speech, as the model cannot determine which speaker to transcribe. Audio-visual speech recognition (AVSR) systems improve robustness by complementing the audio stream with the visual information that is invariant to noise and helps the model focus on the desired speaker. However, previous AVSR work focused solely on the supervised learning setup; hence the progress was hindered by the amount of labeled data available. In this work, we present a self-supervised AVSR framework built upon Audio-Visual HuBERT (AV-HuBERT), a state-of-the-art audio-visual speech representation learning model. On the largest available AVSR benchmark dataset LRS3, our approach outperforms prior state-of-the-art by ~50% (28.0% vs. 14.1%) using less than 10% of labeled data (433hr vs. 30hr) in the presence of babble noise, while reducing the WER of an audio-based model by over 75% (25.8% vs. 5.8%) on average.

Bowen Shi, Wei-Ning Hsu, Abdelrahman Mohamed • 2022
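
The headline numbers in the abstract are measured on LRS3 test utterances corrupted with babble noise at a fixed signal-to-noise ratio (SNR), e.g. 0 dB. As a minimal sketch of how a noisy evaluation input at a target SNR can be constructed (the function name and NumPy-based mixing are illustrative assumptions, not the paper's released code):

```python
import numpy as np

def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    """Mix a noise waveform into a speech waveform at a target SNR (dB).

    The noise is scaled so that 10 * log10(P_speech / P_noise) == snr_db;
    at 0 dB SNR, speech and noise have equal average power.
    """
    # Loop or trim the noise to match the speech length.
    if len(noise) < len(speech):
        noise = np.tile(noise, int(np.ceil(len(speech) / len(noise))))
    noise = noise[: len(speech)]

    # Average power of each signal.
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)

    # Scale factor that places the noise at the requested SNR.
    scale = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + scale * noise

# Example (hypothetical waveforms): corrupt an utterance with babble
# noise at 0 dB SNR, the hardest condition reported in the abstract.
# noisy = mix_at_snr(clean_waveform, babble_waveform, snr_db=0.0)
```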

Related benchmarks

Task | Dataset | Result | Rank
Visual Speech Recognition | LRS3 (test) | WER 1.4 | 159
Audio-Visual Speech Recognition | LRS3 clean (test) | WER 1.4 | 70
Visual Speech Recognition | LRS3 | WER 0.286 | 59
Audio-Visual Speech Recognition | LRS3 babble noise at 0 dB SNR (test) | WER 4.9 | 32
English Transcription | LRS3 noisy, 0 dB SNR (test) | WER 0.058 | 25
Audio-visual speech-to-text translation | MuAViC (test) | BLEU (El->En) 14.3 | 23
Automatic Speech Recognition | LRS3 clean original (test) | WER 1.6 | 21
Audio-Visual Speech Recognition | WildVSR (test) | WER 0.487 | 12
Audio-Visual Speech Recognition | MuAViC clean environment (test) | Accuracy (En) 2 | 9
Audio-Visual Speech Recognition | MuAViC noise environment (test) | Accuracy (En) 39.3 | 9

Showing 10 of 24 rows
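
The WER entries above follow the standard definition: the word-level Levenshtein distance between hypothesis and reference (substitutions + deletions + insertions), divided by the reference length; note that some leaderboards report it as a fraction and others as a percentage. A minimal self-contained sketch of that computation (the `wer` helper is illustrative, not taken from the paper's codebase):

```python
def wer(reference: list[str], hypothesis: list[str]) -> float:
    """Word error rate: word-level edit distance / reference length."""
    m, n = len(reference), len(hypothesis)
    # dp[i][j] = edit distance between reference[:i] and hypothesis[:j].
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        dp[i][0] = i
    for j in range(n + 1):
        dp[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if reference[i - 1] == hypothesis[j - 1] else 1
            dp[i][j] = min(
                dp[i - 1][j] + 1,         # deletion
                dp[i][j - 1] + 1,         # insertion
                dp[i - 1][j - 1] + cost,  # substitution or match
            )
    return dp[m][n] / m

# Example: one substitution over four reference words -> 0.25, i.e. 25% WER.
print(wer("the cat sat down".split(), "the cat sat up".split()))
```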

Other info

Code
