
VoiceLDM: Text-to-Speech with Environmental Context

About

This paper presents VoiceLDM, a model designed to produce audio that accurately follows two distinct natural language text prompts: the description prompt and the content prompt. The former provides information about the overall environmental context of the audio, while the latter conveys the linguistic content. To achieve this, we adopt a text-to-audio (TTA) model based on latent diffusion models and extend its functionality to incorporate an additional content prompt as a conditional input. By utilizing pretrained contrastive language-audio pretraining (CLAP) and Whisper, VoiceLDM is trained on large amounts of real-world audio without manual annotations or transcriptions. Additionally, we employ dual classifier-free guidance to further enhance the controllability of VoiceLDM. Experimental results demonstrate that VoiceLDM is capable of generating plausible audio that aligns well with both input conditions, even surpassing the speech intelligibility of the ground truth audio on the AudioCaps test set. Furthermore, we explore the text-to-speech (TTS) and zero-shot text-to-audio capabilities of VoiceLDM and show that it achieves competitive results. Demos and code are available at https://voiceldm.github.io.
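The dual classifier-free guidance mentioned above can be illustrated with a minimal sketch. The standard per-condition extension of classifier-free guidance combines an unconditional noise prediction with two conditional ones (description and content), each scaled by its own guidance weight; the exact formulation and weights used in the paper may differ, and the function and argument names below are illustrative.

```python
import numpy as np

def dual_cfg(eps_uncond, eps_desc, eps_cont, w_desc=7.0, w_cont=7.0):
    """Combine three diffusion noise predictions with dual classifier-free guidance.

    eps_uncond : prediction with both prompts dropped (unconditional)
    eps_desc   : prediction conditioned on the description prompt only
    eps_cont   : prediction conditioned on the content prompt only
    w_desc, w_cont : guidance scales; larger values push the sample harder
                     toward the corresponding condition (values here are
                     illustrative defaults, not the paper's settings)
    """
    return (eps_uncond
            + w_desc * (eps_desc - eps_uncond)
            + w_cont * (eps_cont - eps_uncond))

# Example: with both weights at 0 the guidance reduces to the
# unconditional prediction, recovering plain (unguided) sampling.
eps_u = np.zeros(4)
eps_d = np.ones(4)
eps_c = 2.0 * np.ones(4)
print(dual_cfg(eps_u, eps_d, eps_c, w_desc=0.0, w_cont=0.0))  # → [0. 0. 0. 0.]
```

Tuning the two scales independently is what gives the model separate control knobs for environmental context versus linguistic content.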

Yeonghyeon Lee, Inmo Yeon, Juhan Nam, Joon Son Chung • 2023

Related benchmarks

Task                                          Dataset             Metric     Result   Rank
Text-to-Audio Generation                      AudioCaps (test)    FAD        10.28    138
Speech Generation                             Expr                JointCLAP  0.093    5
Speech Generation                             Accent+             JointCLAP  0.204    5
Description-based speech generation           SC (test)           JointCLAP  0.245    4
Description-based speech generation           AC-filt (test)      JointCLAP  0.449    4
Description-based speech generation           Accent+ (test)      JointCLAP  0.235    4
Description-based speech generation           Expr (test)         JointCLAP  0.06     4
Scene-aware visually driven speech synthesis  Vivid-210K (test)   WER        9.23     3
