
Vevo: Controllable Zero-Shot Voice Imitation with Self-Supervised Disentanglement

About

The imitation of voice, targeted at specific speech attributes such as timbre and speaking style, is crucial in speech generation. However, existing methods rely heavily on annotated data and struggle to effectively disentangle timbre and style, which makes controllable generation difficult, especially in zero-shot scenarios. To address these issues, we propose Vevo, a versatile zero-shot voice imitation framework with controllable timbre and style. Vevo operates in two core stages: (1) Content-Style Modeling: given either text or a speech utterance's content tokens as input, an autoregressive transformer generates content-style tokens, prompted by a style reference; (2) Acoustic Modeling: given the content-style tokens as input, a flow-matching transformer produces acoustic representations, prompted by a timbre reference. To obtain the content and content-style tokens of speech, we design a fully self-supervised approach that progressively decouples the timbre, style, and linguistic content of speech. Specifically, we adopt a VQ-VAE as the tokenizer for the continuous hidden features of HuBERT. We treat the vocabulary size of the VQ-VAE codebook as an information bottleneck and adjust it carefully to obtain disentangled speech representations. Trained solely with self-supervision on 60K hours of audiobook speech data, without any fine-tuning on style-specific corpora, Vevo matches or surpasses existing methods on accent and emotion conversion tasks. Additionally, Vevo's effectiveness in zero-shot voice conversion and text-to-speech further demonstrates its strong generalization and versatility. Audio samples are available at https://versavoice.github.io.
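The bottleneck idea in the abstract can be illustrated with a minimal sketch: nearest-neighbor vector quantization of continuous frame features against a codebook, where a smaller vocabulary forces the tokens to discard more of the signal. This is a toy illustration only; the codebook sizes, feature dimension, and random codebooks below are assumptions for demonstration, not the paper's actual configuration (which uses a trained VQ-VAE over HuBERT features).

```python
import numpy as np

def quantize(features, codebook):
    """Nearest-neighbor vector quantization: map each frame to the id of
    its closest codebook entry (squared L2) and return ids + quantized vectors."""
    # features: (T, D), codebook: (K, D)
    dists = ((features[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    ids = dists.argmin(axis=1)
    return ids, codebook[ids]

rng = np.random.default_rng(0)
feats = rng.normal(size=(100, 16))  # stand-in for continuous HuBERT-like features

recon_err = {}
for K in (32, 4096):  # illustrative vocabulary sizes, not the paper's
    codebook = rng.normal(size=(K, 16))
    ids, quantized = quantize(feats, codebook)
    recon_err[K] = float(np.mean((feats - quantized) ** 2))

# A tighter bottleneck (smaller K) throws away more information per frame,
# which is the lever the paper tunes to separate content from style.
print(recon_err)
```

In this sketch the K=32 codebook loses more of the input than the K=4096 one, mirroring the intuition that a small vocabulary keeps mostly linguistic content while a larger one also retains style information.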

Xueyao Zhang, Xiaohui Zhang, Kainan Peng, Zhenyu Tang, Vimal Manohar, Yingru Liu, Jeff Hwang, Dangna Li, Yuhao Wang, Julian Chan, Yuan Huang, Zhizheng Wu, Mingbo Ma · 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Text-to-Speech Synthesis | LibriTTS (CLEAN), LibriVox (NOISY), YouTube (WILD), and My Science Tutor (KIDS) (test) | MOS | 3.36 | 21 |
| Emotion Transfer | ESD, TIMIT, and CREMA-D Evaluation Suite (test) | SSST | 4.55 | 20 |
| Speech Reconstruction | SeedTTS en (test) | WER | 0.0304 | 18 |
| Voice Cloning | Seed-TTS en (test) | WER | 2.53 | 16 |
| Rhythm Transfer | ESD, TIMIT, and CREMA-D Evaluation Suite (test) | SSST | 65 | 10 |
| Zero-shot Voice Imitation | SeedTTS vc-en (test) | UTMOS | 2.83 | 10 |
| Voice Conversion | SeedTTS VC English (test) | WER | 3.01 | 8 |
| Singing Voice Conversion | SVC English | WER | 24.05 | 8 |
| Singing Voice Conversion | Chinese SVC | WER | 22.85 | 8 |
| Voice Conversion | SeedTTS VC Chinese (test) | WER | 4.06 | 8 |

(Showing 10 of 23 rows.)
