Deep Audio-Visual Speech Recognition
About
The goal of this work is to recognise phrases and sentences being spoken by a talking face, with or without the audio. Unlike previous works that have focussed on recognising a limited number of words or phrases, we tackle lip reading as an open-world problem: unconstrained natural-language sentences and in-the-wild videos. Our key contributions are: (1) we compare two models for lip reading, one using a CTC loss and the other using a sequence-to-sequence loss, both built on top of the transformer self-attention architecture; (2) we investigate to what extent lip reading is complementary to audio speech recognition, especially when the audio signal is noisy; (3) we introduce and publicly release a new dataset for audio-visual speech recognition, LRS2-BBC, consisting of thousands of natural sentences from British television. The models that we train surpass the performance of all previous work on a lip reading benchmark dataset by a significant margin.
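To make the contrast in contribution (1) concrete, here is a minimal PyTorch sketch of the two training objectives on top of a shared self-attention encoder. This is an illustration, not the authors' implementation: the vocabulary size, tensor shapes, layer counts, and all variable names are placeholder assumptions.

```python
import torch
import torch.nn as nn

# Placeholder sizes for illustration only (not taken from the paper).
VOCAB = 40      # output characters, with index 0 reserved as the CTC blank
D_MODEL = 512   # transformer feature dimension
T_VIDEO = 75    # number of encoded video frames
T_TEXT = 20     # target transcript length (characters)
BATCH = 8

# Shared self-attention encoder over per-frame visual features.
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=D_MODEL, nhead=8), num_layers=6)
visual_feats = torch.randn(T_VIDEO, BATCH, D_MODEL)  # (time, batch, dim)
memory = encoder(visual_feats)

# Variant 1: CTC. A linear head emits per-frame character posteriors;
# the CTC loss marginalises over all monotonic alignments to the transcript.
ctc_head = nn.Linear(D_MODEL, VOCAB)
log_probs = ctc_head(memory).log_softmax(dim=-1)      # (T_VIDEO, B, VOCAB)
targets = torch.randint(1, VOCAB, (BATCH, T_TEXT))    # blank (0) excluded
ctc_loss = nn.CTCLoss(blank=0)(
    log_probs, targets,
    input_lengths=torch.full((BATCH,), T_VIDEO),
    target_lengths=torch.full((BATCH,), T_TEXT))

# Variant 2: sequence-to-sequence. An attention decoder predicts each
# character from the encoder memory and the preceding characters
# (teacher forcing; a real setup would prepend an SOS token and shift).
decoder = nn.TransformerDecoder(
    nn.TransformerDecoderLayer(d_model=D_MODEL, nhead=8), num_layers=6)
embed, out_proj = nn.Embedding(VOCAB, D_MODEL), nn.Linear(D_MODEL, VOCAB)
causal_mask = torch.triu(
    torch.full((T_TEXT, T_TEXT), float('-inf')), diagonal=1)
dec_out = decoder(embed(targets).transpose(0, 1), memory, tgt_mask=causal_mask)
seq2seq_loss = nn.CrossEntropyLoss()(
    out_proj(dec_out).reshape(-1, VOCAB),
    targets.transpose(0, 1).reshape(-1))

print(float(ctc_loss), float(seq2seq_loss))
```

The practical difference: the CTC head scores frame-synchronous outputs and needs no explicit alignment, while the sequence-to-sequence decoder attends over the whole encoded clip and emits characters autoregressively.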
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Visual Speech Recognition | LRS3 (test) | WER | 7.2 | 159 |
| Visual Speech Recognition | LRS3 High-Resource, 433h labelled v1 (test) | WER | 0.589 | 80 |
| Audio-Visual Speech Recognition | LRS3 clean (test) | WER | 7.2 | 70 |
| Visual-only Speech Recognition | LRS2 (test) | WER | 48.3 | 63 |
| Visual Speech Recognition | LRS3 | WER | 0.589 | 59 |
| Speech Recognition | LRS2 (test) | WER | 8.2 | 49 |
| Automatic Speech Recognition | LRS3 (test) | WER (%) | 8.3 | 46 |
| Visual Speech Recognition | LRS2 | Mean WER | 48.3 | 45 |
| Audio-Visual Speech Recognition | LRS2 (test) | WER | 8.2 | 34 |
| Lip-reading | LRS2 (test) | WER | 48.3 | 28 |
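All results above are word error rates (WER): the word-level edit distance between hypothesis and reference (substitutions + deletions + insertions) divided by the number of reference words, so lower is better. Some rows quote WER as a fraction (e.g. 0.589) and others as a percentage (e.g. 48.3). Below is a minimal sketch of the metric, assuming whitespace tokenisation; the function name and example strings are ours, not from any benchmark tooling.

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance divided by the
    number of reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i          # i deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j          # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / max(len(ref), 1)

# One substitution against four reference words -> WER 0.25 (i.e. 25%).
print(wer("the cat sat down", "the cat sat dawn"))
```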