
Vid2speech: Speech Reconstruction from Silent Video

About

Speechreading is a notoriously difficult task for humans to perform. In this paper we present an end-to-end model based on a convolutional neural network (CNN) for generating an intelligible acoustic speech signal from silent video frames of a speaking person. The proposed CNN generates sound features for each frame based on its neighboring frames. Waveforms are then synthesized from the learned speech features to produce intelligible speech. We show that by leveraging the automatic feature learning capabilities of a CNN, we can obtain state-of-the-art word intelligibility on the GRID dataset, and show promising results for learning out-of-vocabulary (OOV) words.

Ariel Ephrat, Shmuel Peleg • 2017
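The per-frame prediction described in the abstract (the CNN generates sound features for each frame based on its neighboring frames) amounts to sliding a fixed-size temporal window over the video. The sketch below shows how such context windows can be assembled; the window radius `k` and the edge-replication padding are illustrative assumptions, not details from the paper.

```python
# Sketch of the neighboring-frame context the paper's CNN consumes per frame.
# Window radius k and edge padding are assumptions, not taken from the paper.

def frame_windows(frames, k=2):
    """For each frame, return a window of its 2k+1 neighbors,
    replicating the first/last frame at the sequence boundaries."""
    padded = [frames[0]] * k + list(frames) + [frames[-1]] * k
    return [padded[i:i + 2 * k + 1] for i in range(len(frames))]

# Example: 5 video frames, labelled by index.
windows = frame_windows([0, 1, 2, 3, 4], k=2)
print(windows[0])  # [0, 0, 0, 1, 2] -- first frame padded with copies of itself
print(windows[2])  # [0, 1, 2, 3, 4] -- full neighborhood around the center frame
```

Each window would then be fed to the CNN, which predicts the acoustic features for the center frame; the predicted features for all frames are finally synthesized back into a waveform.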

Related benchmarks

Task                  | Dataset                   | Result (STOI) | Rank
Speech Reconstruction | GRID (speaker-dependent)  | 0.491         | 7
Lip to Speech         | TCD-TIMIT unseen (test)   | 0.451         | 5
Lip to Speech         | GRID unseen (test)        | 0.491         | 5
