
AudioLDM: Text-to-Audio Generation with Latent Diffusion Models

About

Text-to-audio (TTA) systems have recently gained attention for their ability to synthesize general audio from text descriptions. However, previous TTA studies suffer from limited generation quality and high computational cost. In this study, we propose AudioLDM, a TTA system built on a latent space that learns continuous audio representations from contrastive language-audio pretraining (CLAP) latents. The pretrained CLAP models enable us to train LDMs with audio embeddings while providing text embeddings as the condition during sampling. By learning the latent representations of audio signals and their compositions without modeling the cross-modal relationship, AudioLDM is advantageous in both generation quality and computational efficiency. Trained on AudioCaps with a single GPU, AudioLDM achieves state-of-the-art TTA performance measured by both objective and subjective metrics (e.g., Fréchet distance). Moreover, AudioLDM is the first TTA system that enables various text-guided audio manipulations (e.g., style transfer) in a zero-shot fashion. Our implementation and demos are available at https://audioldm.github.io.
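The key trick described above is that, because CLAP maps audio and text into a shared embedding space, the diffusion model can be conditioned on audio embeddings at training time and on text embeddings at sampling time. The following is a minimal toy sketch of that idea; the encoder and denoiser functions are hypothetical stand-ins, not AudioLDM's actual networks.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8  # embedding dimension (illustrative; real CLAP embeddings are 512-d)

def clap_audio_embed(audio: np.ndarray) -> np.ndarray:
    """Toy stand-in for CLAP's audio encoder: project to D dims, L2-normalize."""
    v = audio[:D] if audio.size >= D else np.pad(audio, (0, D - audio.size))
    return v / (np.linalg.norm(v) + 1e-8)

def clap_text_embed(text: str) -> np.ndarray:
    """Toy stand-in for CLAP's text encoder: hash characters into the SAME
    D-dim space, L2-normalize. Sharing the space is what makes the
    train-on-audio / sample-on-text swap possible."""
    v = np.zeros(D)
    for i, ch in enumerate(text):
        v[i % D] += ord(ch)
    return v / (np.linalg.norm(v) + 1e-8)

def denoise_step(z_t: np.ndarray, cond: np.ndarray, t: float) -> np.ndarray:
    """Toy conditional 'denoiser': nudge the latent toward the condition.
    A real LDM would predict noise with a U-Net conditioned on `cond`."""
    return z_t + t * (cond - z_t)

# Training-time condition: CLAP audio embedding of a target clip.
audio = rng.standard_normal(16)
cond_train = clap_audio_embed(audio)

# Sampling-time condition: CLAP text embedding of a caption (no audio needed).
cond_sample = clap_text_embed("a dog barking in the rain")

z = rng.standard_normal(D)  # start from Gaussian noise
for t in np.linspace(0.1, 1.0, 10):
    z = denoise_step(z, cond_sample, t)
```

Both conditions are unit vectors in the same space, so the denoiser never needs to know which modality produced them; that is the sense in which AudioLDM avoids modeling the cross-modal relationship directly.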

Haohe Liu, Zehua Chen, Yi Yuan, Xinhao Mei, Xubo Liu, Danilo Mandic, Wenwu Wang, Mark D. Plumbley · 2023

Related benchmarks

Task                           | Dataset                         | Metric             | Result | Rank
Text-to-Audio Generation       | AudioCaps (test)                | FAD                | 1.52   | 138
Text-to-Music Generation       | MusicCaps (evaluation set)      | FAD                | 3.2    | 20
Text-to-Audio Generation       | Clotho (test)                   | FID                | 24.13  | 17
Text-to-Audio-Video Generation | Verse-Bench                     | MS                 | 0.05   | 16
Music Generation               | MusicCaps                       | FAD                | 2.29   | 11
Video-to-Audio Generation      | VGGSound original (test)        | --                 | --     | 8
Text-to-Music Generation       | MusicCaps unbalanced (test)     | FAD                | 2.3    | 7
Text-to-Image Generation       | COCO caption (val)              | FID                | 12.63  | 7
Text-to-Music Generation       | MusicCaps genre-balanced (test) | T2M-QLT            | 81.9   | 6
Text-to-Audio                  | AudioBox                        | Clarity Score (CE) | 3.27   | 6

Showing 10 of 22 rows
