
Conformers are All You Need for Visual Speech Recognition

About

Visual speech recognition models extract visual features in a hierarchical manner. At the lower level, there is a visual front-end with a limited temporal receptive field that processes the raw pixels depicting the lips or faces. At the higher level, there is an encoder that attends to the embeddings produced by the front-end over a large temporal receptive field. Previous work has focused on improving the visual front-end of the model to extract more useful features for speech recognition. Surprisingly, our work shows that complex visual front-ends are not necessary. Instead of allocating resources to a sophisticated visual front-end, we find that a linear visual front-end paired with a larger Conformer encoder results in lower latency, more efficient memory usage, and improved WER performance. We achieve a new state-of-the-art of 12.8% WER for visual speech recognition on the TED LRS3 dataset, which rivals the performance of audio-only models from just four years ago.
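The key architectural point above is that the visual front-end can be a single linear projection: each frame of the lip crop is flattened and mapped to an embedding, with all temporal modelling deferred to the Conformer encoder. A minimal NumPy sketch of that linear front-end is below; all dimensions (`T`, `H`, `W`, `C`, `d_model`) are assumed for illustration and are not taken from the paper.

```python
import numpy as np

# Assumed shapes -- the exact dimensions are not stated in this abstract.
T, H, W, C = 16, 64, 64, 1   # frames, height, width, channels of a lip crop
d_model = 256                # encoder embedding size (illustrative)

rng = np.random.default_rng(0)
video = rng.standard_normal((T, H, W, C))    # raw pixel input

# Linear visual front-end: flatten each frame and apply one projection.
# No convolutions and no temporal context -- each frame is embedded
# independently of its neighbours.
W_proj = rng.standard_normal((H * W * C, d_model)) * (H * W * C) ** -0.5
frames = video.reshape(T, H * W * C)
embeddings = frames @ W_proj                 # shape (T, d_model)

# In the full model, these per-frame embeddings feed a large Conformer
# encoder (not sketched here), which supplies the temporal receptive
# field via self-attention and convolution modules over the sequence.
print(embeddings.shape)
```

The design trade-off the abstract describes is visible here: the front-end does no temporal work at all, so the capacity saved on convolutional stacks can be reallocated to a larger encoder.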

Oscar Chang, Hank Liao, Dmitriy Serdyuk, Ankit Shah, Olivier Siohan • 2023

Related benchmarks

Task                             Dataset      Metric  Result  Rank
Visual Speech Recognition        LRS3 (test)  WER     12.8    159
Visual Speech Recognition        LRS3         WER     0.128   59
Speech Recognition               LRS3-TED     WER     12.8    25
Audio-Visual Speech Recognition  TED LRS3     WER     0.009   10
Audio-visual diarization         MEET360      WER     24.5    3
