
Visual Speech Recognition for Multiple Languages in the Wild

About

Visual speech recognition (VSR) aims to recognize the content of speech from lip movements alone, without relying on the audio stream. Advances in deep learning and the availability of large audio-visual datasets have led to the development of much more accurate and robust VSR models than ever before. However, these advances are usually due to larger training sets rather than model design. Here we demonstrate that designing better models is just as important as using larger training sets. We propose the addition of prediction-based auxiliary tasks to a VSR model, and highlight the importance of hyperparameter optimization and appropriate data augmentations. We show that such a model works for different languages and outperforms all previous methods trained on publicly available datasets by a large margin. It even outperforms models that were trained on non-publicly available datasets containing up to 21 times more data. We show, furthermore, that using additional training data, even in other languages or with automatically generated transcriptions, results in further improvement.

Pingchuan Ma, Stavros Petridis, Maja Pantic • 2022

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Visual Speech Recognition | LRS3 (test) | WER 37.9 | 159 |
| Visual-only Speech Recognition | LRS2 (test) | WER 25.5 | 63 |
| Visual Speech Recognition | LRS3 | WER 0.315 | 59 |
| Visual Speech Recognition | LRS2 | Mean WER 25.5 | 45 |
| Lip-reading | LRW 1.0 (test) | Top-1 Accuracy 92.9 | 37 |
| Visual Speech Recognition | LRS3 low-resource (test) | WER 34.7 | 20 |
| Visual Speech Recognition | LRS3 high-resource (test) | WER 31.5 | 16 |
| Lip-reading | LRS3 (test) | WER 31.5 | 8 |
| Visual Speech Recognition | CMLR | Best CER 8 | 7 |
| Video Speech Recognition | Multilingual TEDx-French (MTfr) (test) | Mean WER 67 | 4 |

(10 of 16 benchmark rows shown.)
