
Deep Audio-Visual Speech Recognition

About

The goal of this work is to recognise phrases and sentences being spoken by a talking face, with or without the audio. Unlike previous works that have focussed on recognising a limited number of words or phrases, we tackle lip reading as an open-world problem: unconstrained natural language sentences in videos recorded in the wild. Our key contributions are: (1) we compare two models for lip reading, one using a CTC loss and the other using a sequence-to-sequence loss, both built on top of the transformer self-attention architecture; (2) we investigate to what extent lip reading is complementary to audio speech recognition, especially when the audio signal is noisy; (3) we introduce and publicly release a new dataset for audio-visual speech recognition, LRS2-BBC, consisting of thousands of natural sentences from British television. The models that we train surpass the performance of all previous work on a lip reading benchmark dataset by a significant margin.
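The CTC model mentioned above emits one symbol distribution per video frame, including a special "blank" symbol; the predicted transcript is obtained by merging consecutive repeated symbols and then removing blanks. The following is a minimal generic sketch of that greedy collapse step (not the authors' decoder; the symbols and frame sequence are illustrative):

```python
# Greedy CTC decoding sketch: take the argmax symbol per frame,
# merge consecutive repeats, then drop the blank symbol.

BLANK = "-"

def ctc_collapse(frame_symbols):
    """Collapse a per-frame symbol sequence into an output string."""
    out = []
    prev = None
    for s in frame_symbols:
        if s != prev and s != BLANK:
            out.append(s)
        prev = s
    return "".join(out)

# e.g. 8 video frames predicting the word "cat"
print(ctc_collapse(list("cc-aa-tt")))  # cat
```

Note that a blank between two identical symbols keeps them distinct: `ctc_collapse(list("a-ab"))` yields `"aab"`, which is how CTC represents repeated characters.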

Triantafyllos Afouras, Joon Son Chung, Andrew Senior, Oriol Vinyals, Andrew Zisserman • 2018

Related benchmarks

Task                             | Dataset                                      | Result        | Rank
Visual Speech Recognition       | LRS3 (test)                                  | WER 7.2       | 159
Visual Speech Recognition       | LRS3 High-Resource, 433h labelled v1 (test)  | WER 0.589     | 80
Audio-Visual Speech Recognition | LRS3 clean (test)                            | WER 7.2       | 70
Visual-only Speech Recognition  | LRS2 (test)                                  | WER 48.3      | 63
Visual Speech Recognition       | LRS3                                         | WER 0.589     | 59
Speech Recognition              | LRS2 (test)                                  | WER 8.2       | 49
Automatic Speech Recognition    | LRS3 (test)                                  | WER (%) 8.3   | 46
Visual Speech Recognition       | LRS2                                         | Mean WER 48.3 | 45
Audio-Visual Speech Recognition | LRS2 (test)                                  | WER 8.2       | 34
Lip-reading                     | LRS2 (test)                                  | WER 48.3      | 28

(Showing 10 of 30 rows.)
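The results above are reported as word error rate (WER): the word-level edit distance (substitutions + insertions + deletions) between the hypothesis and the reference transcript, divided by the number of reference words. A minimal sketch of the standard computation:

```python
# Word error rate via dynamic-programming edit distance over words.

def wer(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # delete all remaining reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # insert all remaining hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,          # deletion
                          d[i][j - 1] + 1,          # insertion
                          d[i - 1][j - 1] + cost)   # substitution / match
    return d[len(ref)][len(hyp)] / len(ref)

# One substitution ("the" -> "a") over six reference words:
print(wer("the cat sat on the mat", "the cat sat on a mat"))  # ≈ 0.1667
```

Lower is better; a WER of 48.3 on LRS2 (visual only) means roughly half the reference words are mis-recognised, while the audio-visual models reach single-digit error rates.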
