
Imaginary Voice: Face-styled Diffusion Model for Text-to-Speech

About

The goal of this work is zero-shot text-to-speech synthesis, with speaking styles and voices learnt from facial characteristics. Inspired by the fact that people can imagine someone's voice when they look at their face, we introduce a face-styled diffusion text-to-speech (TTS) model within a unified framework learnt from visible attributes, called Face-TTS. This is the first time that face images are used as a condition to train a TTS model. We jointly train cross-modal biometric and TTS models to preserve speaker identity between face images and generated speech segments. We also propose a speaker feature binding loss to enforce the similarity of the generated and ground-truth speech segments in the speaker embedding space. Since the biometric information is extracted directly from the face image, our method does not require extra fine-tuning steps to generate speech from unseen and unheard speakers. We train and evaluate the model on the LRS3 dataset, an in-the-wild audio-visual corpus containing background noise and diverse speaking styles. The project page is https://facetts.github.io.
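The speaker feature binding loss described above compares the generated and ground-truth speech in a shared speaker embedding space. A minimal sketch of one plausible form of such a loss, a cosine distance between the two embeddings, is shown below. The function name and the use of plain NumPy vectors are illustrative assumptions; in the actual model the embeddings come from a learned biometric encoder and the loss is backpropagated through the TTS network.

```python
import numpy as np

def speaker_binding_loss(gen_emb: np.ndarray, ref_emb: np.ndarray) -> float:
    """Cosine-distance loss between speaker embeddings of generated and
    ground-truth speech. Hypothetical sketch: the paper's loss operates on
    embeddings from a jointly trained cross-modal biometric encoder."""
    gen = gen_emb / np.linalg.norm(gen_emb)  # unit-normalise generated embedding
    ref = ref_emb / np.linalg.norm(ref_emb)  # unit-normalise reference embedding
    # Loss is 0 when the embeddings point the same way, up to 2 when opposed.
    return float(1.0 - np.dot(gen, ref))
```

Minimising this quantity pushes the generated speech's speaker embedding toward the ground-truth speaker's, which is what lets biometric features extracted from a face image stand in for an enrolled voice at inference time.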

Jiyoung Lee, Joon Son Chung, Soo-Whan Chung • 2023

Related benchmarks

Task | Dataset | Metric | Result | Rank
Movie Dubbing | V2C-Animation Dub denoise 2.0 | Speaker Similarity | 51.98 | 12
Multi-speaker Dubbing | V2C-Animation Dub 1.0 (test) | Speaker Similarity (SPK-SIM) | 52.81 | 12
Multi-speaker Dubbing | GRID Dub 1.0 (test) | SPK-SIM (%) | 82.97 | 12
Video-to-Speech Synthesis | GRID (test) | Sim-O | 0.42 | 11
Video-to-Speech Synthesis | V2C-Animation | Sim-O | 9 | 11
Video-to-Speech Synthesis | V2C Dub 3.0 | MOS-S | 3.1 | 10
Movie Dubbing | GRID Dubbing Setting 2.0 | LSE-C | 4.55 | 10
Movie Dubbing | GRID Dubbing Setting 1.0 | LSE-C | 4.69 | 10
Video Dubbing | Chem Setting 1.0 (test) | LSE-C | 1.98 | 8
Video Dubbing | Chem Setting 2.0 (test) | LSE-C | 1.96 | 8

(Showing 10 of 12 rows.)
