
Lip Reading Sentences in the Wild

About

The goal of this work is to recognise phrases and sentences being spoken by a talking face, with or without the audio. Unlike previous works that have focussed on recognising a limited number of words or phrases, we tackle lip reading as an open-world problem - unconstrained natural language sentences, and in-the-wild videos. Our key contributions are: (1) a 'Watch, Listen, Attend and Spell' (WLAS) network that learns to transcribe videos of mouth motion to characters; (2) a curriculum learning strategy to accelerate training and to reduce overfitting; (3) a 'Lip Reading Sentences' (LRS) dataset for visual speech recognition, consisting of over 100,000 natural sentences from British television. The WLAS model trained on the LRS dataset surpasses the performance of all previous work on standard lip reading benchmark datasets, often by a significant margin. This lip reading performance beats a professional lip reader on videos from BBC television, and we also demonstrate that visual information helps to improve speech recognition performance even when the audio is available.
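The curriculum learning strategy mentioned above trains the network on short sequences first and gradually admits longer ones. A minimal, illustrative sketch of such a length-based schedule (the function and parameter names are assumptions, not the paper's exact recipe):

```python
# Hypothetical length-based curriculum filter for sequence training.
# Early epochs see only short transcripts; the length cap grows each epoch,
# which tends to speed up convergence and reduce overfitting.
def curriculum_batches(samples, epoch, start_len=2, growth=2):
    """Yield (transcript, video) pairs whose transcript length is within
    the current curriculum cap; the cap grows linearly with the epoch."""
    max_len = start_len + epoch * growth
    for transcript, video in samples:
        if len(transcript.split()) <= max_len:
            yield transcript, video
```

In practice the filter would sit in front of the batch sampler, so the model never sees sentences beyond the current cap until later epochs.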

Joon Son Chung, Andrew Senior, Oriol Vinyals, Andrew Zisserman • 2016

Related benchmarks

Task                           | Dataset             | Result              | Rank
Visual-only Speech Recognition | LRS2 (test)         | WER 70.4            | 63
Speech Recognition             | LRS2 (test)         | WER 70.4            | 49
Visual Speech Recognition      | LRS2                | Mean WER 70.4       | 45
Lip-reading Classification     | LRW (test)          | Accuracy 76.2       | 38
Lip-reading                    | LRS2 (test)         | WER 68.19           | 28
Automatic Speech Recognition   | LRS2-BBC (test)     | WER 0.704           | 21
Lip-reading                    | GRID (test)         | WER 3               | 18
Lip-reading                    | LRW original (test) | Top-1 Accuracy 76.2 | 14
Visual Speech Recognition      | LRS2 v0.4 (test)    | WER 70.4            | 14
Word Recognition               | LRW (test)          | Correct Rate 76.2   | 13

(showing 10 of 17 rows)
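Most results above are reported as word error rate (WER): the word-level edit distance between the hypothesis and the reference transcript, divided by the number of reference words. A minimal sketch of the standard computation (the function name is illustrative):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance between the
    hypothesis and the reference, divided by the reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # DP table: d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution
    return d[len(ref)][len(hyp)] / len(ref)
```

Note that a WER above 100% is possible when the hypothesis contains many insertions, and the table mixes percentage (70.4) and fractional (0.704) notation for the same metric.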
