Moûsai: Text-to-Music Generation with Long-Context Latent Diffusion

About

Recent years have seen the rapid development of large generative models for text; however, much less research has explored the connection between text and another "language" of communication -- music. Music, much like text, can convey emotions, stories, and ideas, and has its own unique structure and syntax. In our work, we bridge text and music via a text-to-music generation model that is highly efficient, expressive, and can handle long-term structure. Specifically, we develop Moûsai, a cascading two-stage latent diffusion model that can generate multiple minutes of high-quality stereo music at 48 kHz from textual descriptions. The model is also highly efficient, enabling real-time inference on a single consumer GPU. Through experiments and property analyses, we show that our model compares favorably with existing music generation models across a variety of criteria. Lastly, to promote an open-source culture, we provide a collection of open-source libraries with the hope of facilitating future work in the field. We open-source the following: code: https://github.com/archinetai/audio-diffusion-pytorch; music samples for this paper: http://bit.ly/44ozWDH; all music samples for all models: https://bit.ly/audio-diffusion.
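As a rough illustration of the cascading two-stage design described above, the Python sketch below shows how inference might be wired together: a frozen text encoder produces prompt embeddings, a text-conditioned diffusion model generates a compressed latent sequence, and a diffusion decoder maps that latent back to 48 kHz stereo audio. All class names, the 64x compression ratio, and the toy denoising loops are illustrative assumptions, not the actual API of the audio-diffusion-pytorch library or the paper's exact configuration.

import torch

# Hypothetical sketch of a Moûsai-style two-stage inference pipeline. Names, shapes,
# and the compression factor are placeholders, not the real library API.

LATENT_DIM = 32
TEXT_DIM = 64
COMPRESSION = 64          # assumed latent-to-waveform compression ratio
SAMPLE_RATE = 48_000      # 48 kHz stereo output, as stated in the abstract


class ToyTextEncoder(torch.nn.Module):
    """Stand-in for a frozen pretrained text encoder mapping prompts to embeddings."""
    def forward(self, prompts):
        # A real system would use a transformer; here we return random embeddings.
        return torch.stack([torch.randn(TEXT_DIM) for _ in prompts])   # (batch, TEXT_DIM)


class ToyLatentDiffusion(torch.nn.Module):
    """Stand-in for the text-conditioned diffusion model over compressed latents."""
    @torch.no_grad()
    def sample(self, text_emb, latent_length, steps=50):
        batch = text_emb.shape[0]
        x = torch.randn(batch, LATENT_DIM, latent_length)   # start from Gaussian noise
        for _ in range(steps):
            x = x - 0.01 * x                                 # placeholder for one denoising step
        return x                                             # (batch, LATENT_DIM, latent_length)


class ToyDiffusionDecoder(torch.nn.Module):
    """Stand-in for the diffusion decoder mapping latents back to stereo waveforms."""
    @torch.no_grad()
    def decode(self, latent, steps=50):
        batch, _, latent_length = latent.shape
        wave = torch.randn(batch, 2, latent_length * COMPRESSION)   # (batch, stereo, samples)
        for _ in range(steps):
            wave = wave - 0.01 * wave                        # placeholder for one denoising step
        return wave


def generate(prompt: str, seconds: float) -> torch.Tensor:
    text_encoder, latent_model, decoder = ToyTextEncoder(), ToyLatentDiffusion(), ToyDiffusionDecoder()
    text_emb = text_encoder([prompt])
    # The diffusion model runs over a sequence far shorter than the raw waveform,
    # which is what makes minute-scale generation tractable on a single GPU.
    latent_length = int(seconds * SAMPLE_RATE) // COMPRESSION
    latent = latent_model.sample(text_emb, latent_length=latent_length)
    return decoder.decode(latent)                            # (1, 2, seconds * 48000)


if __name__ == "__main__":
    audio = generate("calm ambient piano with soft pads", seconds=2.0)
    print(audio.shape)                                       # torch.Size([1, 2, 96000])

The key design choice this sketch tries to convey is that the expensive text-conditioned diffusion happens in the compressed latent space rather than on the raw 48 kHz waveform, which is what allows long-term musical structure to fit in the model's context.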

Flavio Schneider, Ojasv Kamal, Zhijing Jin, Bernhard Schölkopf • 2023

Related benchmarks

Task                        Dataset                           Metric    Result   Rank
Text-to-Music Generation    MusicCaps (evaluation set)        FAD       7.5      20
Music Generation            MusicCaps                         FAD       7.5      11
Music Generation            MusicCaps (test)                  FAD       7.5      10
Music Generation            MELBench (test)                   FAD       9.13     7
Text-to-Music Generation    MusicCaps unbalanced (test)       FAD       7.5      7
Text-to-Music Generation    MusicCaps genre-balanced (test)   T2M-QLT   76.3     6
