
Video-Driven Speech Reconstruction using Generative Adversarial Networks

About

Speech is a means of communication that relies on both audio and visual information. The absence of one modality can often lead to confusion or misinterpretation of information. In this paper we present an end-to-end temporal model capable of directly synthesising audio from silent video, without needing to transform to and from intermediate features. Our proposed approach, based on GANs, is capable of producing natural-sounding, intelligible speech which is synchronised with the video. The performance of our model is evaluated on the GRID dataset for both speaker-dependent and speaker-independent scenarios. To the best of our knowledge this is the first method that maps video directly to raw audio and the first to produce intelligible speech when tested on previously unseen speakers. We evaluate the synthesised audio not only on sound quality but also on the accuracy of the spoken words.
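The key property of a direct video-to-waveform model is that audio and video stay time-aligned: each video frame must account for a fixed number of raw audio samples. The sketch below is a toy illustration of that alignment, not the authors' architecture; it assumes 25 fps video (the GRID frame rate) and a 16 kHz audio rate, and `toy_generator` is a hypothetical stand-in for the temporal GAN generator.

```python
import numpy as np

FPS = 25                  # GRID video frame rate
SAMPLE_RATE = 16_000      # assumed audio rate; the paper's rate may differ
SAMPLES_PER_FRAME = SAMPLE_RATE // FPS  # 640 audio samples per video frame

rng = np.random.default_rng(0)

def toy_generator(frame_features: np.ndarray) -> np.ndarray:
    """Map per-frame video features (n_frames, feat_dim) to a raw waveform.

    Stand-in for a learned temporal generator: a fixed linear projection
    expands each frame's feature vector into its 640-sample audio chunk,
    so audio and video remain synchronised by construction.
    """
    n_frames, feat_dim = frame_features.shape
    projection = rng.standard_normal((feat_dim, SAMPLES_PER_FRAME))
    chunks = np.tanh(frame_features @ projection)  # bound samples to [-1, 1]
    return chunks.reshape(n_frames * SAMPLES_PER_FRAME)

features = rng.standard_normal((75, 128))  # 75 frames = 3 s of GRID video
waveform = toy_generator(features)
print(waveform.shape)  # (48000,) -> 3 s of audio at 16 kHz
```

In the real model the projection is replaced by a learned network trained adversarially, with discriminators judging both the realism of the waveform and its synchrony with the video.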

Konstantinos Vougioukas, Pingchuan Ma, Stavros Petridis, Maja Pantic · 2019

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Lip to Speech | Lip2Wav unconstrained single-speaker 1.0 | STOI | 0.251 | 15 |
| Speech Reconstruction | GRID (speaker-dependent) | STOI | 0.564 | 7 |
| Lip to Speech | TCD-TIMIT unseen (test) | STOI | 0.511 | 5 |
| Lip to Speech | GRID unseen (test) | STOI | 0.564 | 5 |
| Lip-to-Speech Synthesis | Lip2Wav 1.0 (test) | Intelligibility | 1.56 | 5 |
| Lip to Speech | Lip2Wav unseen (test) | Mispronunciations | 36.6 | 3 |
| Speech Reconstruction | GRID (speaker-independent) | STOI | 0.445 | 3 |
