LiRA: Learning Visual Speech Representations from Audio through Self-supervision
About
The large amount of audiovisual content being shared online today has drawn substantial attention to the prospect of audiovisual self-supervised learning. Recent works have focused on each of these modalities separately, while others have attempted to model both simultaneously in a cross-modal fashion. However, comparatively little attention has been given to leveraging one modality as a training objective to learn from the other. In this work, we propose Learning visual speech Representations from Audio via self-supervision (LiRA). Specifically, we train a ResNet+Conformer model to predict acoustic features from unlabelled visual speech. We find that this pre-trained model can be leveraged towards word-level and sentence-level lip-reading through feature extraction and fine-tuning experiments. We show that our approach significantly outperforms other self-supervised methods on the Lip Reading in the Wild (LRW) dataset and achieves state-of-the-art performance on Lip Reading Sentences 2 (LRS2) using only a fraction of the total labelled data.
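The sketch below illustrates the pretext task described above: a visual model ingests silent mouth-region video and regresses frame-level acoustic features extracted from the accompanying audio. This is a minimal illustration, not the authors' implementation; the per-frame 2D CNN frontend stands in for the paper's ResNet, `nn.TransformerEncoder` stands in for the Conformer, and the dimensions, the L1 regression objective, and names such as `acoustic_dim` are assumptions for the example.

```python
# Minimal sketch of a LiRA-style pretext task: predict per-frame acoustic
# features from silent video of the mouth region. Simplified stand-ins are
# used throughout (2D CNN frontend instead of a ResNet, TransformerEncoder
# instead of a Conformer); all dimensions are illustrative assumptions.
import torch
import torch.nn as nn


class VisualToAudioModel(nn.Module):
    def __init__(self, feat_dim=256, acoustic_dim=80, num_layers=4):
        super().__init__()
        # Per-frame visual frontend: grayscale mouth crops -> feat_dim vector.
        self.frontend = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )
        # Temporal encoder (stand-in for the Conformer used in the paper).
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=feat_dim, nhead=4, dim_feedforward=512, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)
        # Regression head: one acoustic feature vector per video frame.
        self.head = nn.Linear(feat_dim, acoustic_dim)

    def forward(self, video):            # video: (B, T, 1, H, W)
        b, t = video.shape[:2]
        frames = video.flatten(0, 1)     # (B*T, 1, H, W)
        feats = self.frontend(frames).view(b, t, -1)
        feats = self.encoder(feats)      # (B, T, feat_dim)
        return self.head(feats)          # (B, T, acoustic_dim)


if __name__ == "__main__":
    model = VisualToAudioModel()
    video = torch.randn(2, 25, 1, 88, 88)      # 1 s of 25 fps mouth crops (assumed size)
    acoustic_targets = torch.randn(2, 25, 80)  # e.g. per-frame acoustic features from audio
    pred = model(video)
    loss = nn.functional.l1_loss(pred, acoustic_targets)  # regression pretext loss
    loss.backward()
    print(pred.shape, loss.item())
```

After pre-training on unlabelled audiovisual data in this way, the frontend and encoder can be reused for word-level or sentence-level lip-reading, either as a frozen feature extractor or by fine-tuning on labelled data, as described in the abstract above.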
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Visual Speech Recognition | LRS3 (test) | WER | 49.6 | 159 |
| Visual-only Speech Recognition | LRS2 (test) | WER | 38.8 | 63 |
| Visual Speech Recognition | LRS2 | Mean WER | 38.8 | 45 |
| Lip-reading | LRW 1.0 (test) | Top-1 Accuracy | 88.1 | 37 |
| Audio-Visual Speech Recognition | LRS2 (test) | WER | 3.7 | 34 |
| Lip-reading | LRS2 (test) | WER | 39.1 | 28 |
| Visual Speech Recognition | LRS3 high-resource (test) | WER | 49.6 | 16 |
| Lip-reading | LRS3 (test) | WER | 43.3 | 8 |