
Let There Be Sound: Reconstructing High Quality Speech from Silent Videos

About

The goal of this work is to reconstruct high quality speech from lip motions alone, a task also known as lip-to-speech. A key challenge of lip-to-speech systems is the one-to-many mapping caused by (1) the existence of homophenes and (2) multiple speech variations, resulting in mispronounced and over-smoothed speech. In this paper, we propose a novel lip-to-speech system that significantly improves the generation quality by alleviating the one-to-many mapping problem from multiple perspectives. Specifically, we incorporate (1) self-supervised speech representations to disambiguate homophenes, and (2) acoustic variance information to model diverse speech styles. Additionally, to better solve the aforementioned problem, we employ a flow-based post-net which captures and refines the details of the generated speech. We perform extensive experiments on two datasets, and demonstrate that our method achieves generation quality close to that of real human utterances, outperforming existing methods in terms of speech naturalness and intelligibility by a large margin. Synthesised samples are available at our demo page: https://mm.kaist.ac.kr/projects/LTBS.

Ji-Hoon Kim, Jaehun Kim, Joon Son Chung • 2023

Related benchmarks

Task                       | Dataset         | Metric | Result | Rank
Video-to-Speech Synthesis  | LRS3-TED (test) | UTMOS  | 2.417  | 7
Video-to-Speech Synthesis  | LRS2-BBC (test) | UTMOS  | 2.288  | 7
Video-to-Speech Synthesis  | LRS3 (test)     | GE2E   | 0.609  | 4
