Improved Speech Reconstruction from Silent Video

About

Speechreading is the task of inferring phonetic information from visually observed articulatory facial movements, and is a notoriously difficult task for humans to perform. In this paper we present an end-to-end model based on a convolutional neural network (CNN) for generating an intelligible and natural-sounding acoustic speech signal from silent video frames of a speaking person. We train our model on speakers from the GRID and TCD-TIMIT datasets, and evaluate the quality and intelligibility of reconstructed speech using common objective measurements. We show that speech predictions from the proposed model attain scores which indicate significantly improved quality over existing models. In addition, we show promising results towards reconstructing speech from an unconstrained dictionary.

Ariel Ephrat, Tavi Halperin, Shmuel Peleg • 2017

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Lip to Speech | Lip2Wav unconstrained single-speaker 1.0 | STOI 0.192 | 15 |
| Speech Reconstruction | GRID (speaker-dependent) | STOI 0.659 | 7 |
| Lip to Speech | GRID unseen (test) | STOI 0.659 | 5 |
| Lip to Speech | TCD-TIMIT unseen (test) | STOI 0.487 | 5 |
| Lip-to-Speech Synthesis | Lip2Wav 1.0 (test) | Intelligibility 1.34 | 5 |
| Lip to Speech | Lip2Wav unseen (test) | Mispronunciations 43.3 | 3 |
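Several of the results above are reported as STOI (Short-Time Objective Intelligibility) scores, which compare short-time spectral representations of the reconstructed speech against the clean reference. As a rough illustration only (the real STOI metric uses 1/3-octave band decomposition, normalization, and clipping, none of which appear here), a simplified frame-level spectral correlation between a reference and an estimate can be sketched as:

```python
import numpy as np

def spectral_correlation(reference, estimate, frame_len=256, hop=128):
    """Average per-frame correlation between magnitude spectra of two
    signals. A simplified stand-in for STOI-style scoring, NOT the
    official STOI implementation."""
    n = min(len(reference), len(estimate))
    scores = []
    for start in range(0, n - frame_len + 1, hop):
        # Magnitude spectrum of each aligned frame.
        r = np.abs(np.fft.rfft(reference[start:start + frame_len]))
        e = np.abs(np.fft.rfft(estimate[start:start + frame_len]))
        # Zero-mean before correlating.
        r = r - r.mean()
        e = e - e.mean()
        denom = np.linalg.norm(r) * np.linalg.norm(e)
        if denom > 0:
            scores.append(float(np.dot(r, e) / denom))
    return float(np.mean(scores)) if scores else 0.0

# Identical signals correlate perfectly; unrelated noise scores low.
fs = 8000
t = np.arange(fs) / fs
clean = np.sin(2 * np.pi * 440 * t)
print(spectral_correlation(clean, clean))  # ~1.0
```

Higher scores indicate spectra that track the reference more closely over time, which is the intuition behind reading the STOI columns in the table: 0.659 on GRID reflects substantially more intelligible reconstructions than 0.192 on the unconstrained Lip2Wav setting.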
