
Quality-aware Masked Diffusion Transformer for Enhanced Music Generation

About

Text-to-music (TTM) generation, which converts textual descriptions into audio, opens up innovative avenues for multimedia creation. Achieving high quality and diversity in this process demands extensive, high-quality data, which is often scarce in available datasets. Most open-source datasets suffer from issues such as low-quality waveforms and low text-audio consistency, hindering the advancement of music generation models. To address these challenges, we propose a novel quality-aware training paradigm for generating high-quality, high-musicality music from large-scale, quality-imbalanced datasets. Additionally, by leveraging unique properties in the latent space of musical signals, we adapt and implement a masked diffusion transformer (MDT) model for the TTM task, showcasing its capacity for quality control and enhanced musicality. Furthermore, we introduce a three-stage caption refinement approach to address the issue of low-quality captions. Experiments show state-of-the-art (SOTA) performance on benchmark datasets, including MusicCaps and the Song Describer Dataset, on both objective and subjective metrics. Demo audio samples are available at https://qa-mdt.github.io/, and code and pretrained checkpoints are open-sourced at https://github.com/ivcylc/OpenMusic.
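To make the quality-aware idea concrete, here is a minimal, hedged sketch of one common way to implement quality-conditioned training (the function names, vocabulary size, and bin count are illustrative assumptions, not the authors' actual API): each training clip is assigned a quality score, the score is quantized into a discrete quality token prepended to the text condition, and at inference time the highest-quality token is requested so the model is steered toward high-quality output despite a quality-imbalanced training set.

```python
# Illustrative sketch of quality-aware conditioning; names and sizes
# are assumptions for demonstration, not the paper's actual code.

def quantize_quality(score, n_bins=5):
    """Map a quality score in [0, 1] to a discrete bin id."""
    assert 0.0 <= score <= 1.0
    return min(int(score * n_bins), n_bins - 1)

def build_condition(text_tokens, quality_score, n_bins=5):
    """Prepend a quality token, offset past a hypothetical text vocab."""
    TEXT_VOCAB = 10_000  # illustrative text vocabulary size
    q_token = TEXT_VOCAB + quantize_quality(quality_score, n_bins)
    return [q_token] + list(text_tokens)

# Training: the condition carries the clip's (possibly low) quality bin.
train_cond = build_condition([17, 42, 256], quality_score=0.35)
# Inference: always request the top quality bin.
infer_cond = build_condition([17, 42, 256], quality_score=1.0)
```

The design choice here is that the model never has to discard low-quality data; it learns the quality axis explicitly and the sampler simply asks for the high end of it.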

Chang Li, Ruoyu Wang, Lijuan Liu, Jun Du, Yixuan Sun, Zilu Guo, Zhenrong Zhang, Yuan Jiang, Jianqing Gao, Feng Ma • 2024

Related benchmarks

Task                       Dataset                    Metric        Result  Rank
Music Generation           MusicCaps                  FAD           1.65    11
Text-to-Music Generation   MusicCaps                  KLD           1.31    11
Music Generation           Song Describer Dataset     FAD           1.01    9
Music Generation           Subjective Evaluation Set  Overlap (Po)  3.27    5
