
Deep Lip Reading: a comparison of models and an online application

About

The goal of this paper is to develop state-of-the-art models for lip reading -- visual speech recognition. We develop three architectures and compare their accuracy and training times: (i) a recurrent model using LSTMs; (ii) a fully convolutional model; and (iii) the recently proposed transformer model. The recurrent and fully convolutional models are trained with a Connectionist Temporal Classification loss and use an explicit language model for decoding; the transformer is a sequence-to-sequence model. Our best-performing model improves the state-of-the-art word error rate on the challenging BBC-Oxford Lip Reading Sentences 2 (LRS2) benchmark dataset by over 20 percent. As a further contribution, we investigate the fully convolutional model when used for online (real-time) lip reading of continuous speech, and show that it achieves high performance with low latency.
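The abstract and the benchmark table below report results as word error rate (WER). As a reference for how that metric is defined, here is a minimal sketch (this helper is illustrative, not code from the paper): WER is the word-level edit distance between the hypothesis and the reference transcript, divided by the number of reference words.

```python
# Illustrative helper (not from the paper): word error rate (WER),
# the metric reported on the LRS2/LRS3 benchmarks. It is the minimum
# number of word insertions, deletions, and substitutions needed to
# turn the hypothesis into the reference, divided by reference length.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # delete all remaining reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # insert all remaining hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitution = dp[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            deletion = dp[i - 1][j] + 1
            insertion = dp[i][j - 1] + 1
            dp[i][j] = min(substitution, deletion, insertion)
    return dp[len(ref)][len(hyp)] / len(ref)

# 1 deleted word over 6 reference words: WER of roughly 0.167
print(wer("the cat sat on the mat", "the cat sat on mat"))
```

A lower WER is better; the table below lists each model's WER alongside its leaderboard rank.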

Triantafyllos Afouras, Joon Son Chung, Andrew Zisserman • 2018

Related benchmarks

Task                             Dataset       Result      Rank
Visual Speech Recognition        LRS3 (test)   WER 68.8    159
Visual-only Speech Recognition   LRS2 (test)   WER 58.5    63
