
Simple and Controllable Music Generation

About

We tackle the task of conditional music generation. We introduce MusicGen, a single Language Model (LM) that operates over several streams of compressed discrete music representation, i.e., tokens. Unlike prior work, MusicGen comprises a single-stage transformer LM together with efficient token interleaving patterns, which eliminates the need for cascading several models, e.g., hierarchically or via upsampling. Following this approach, we demonstrate how MusicGen can generate high-quality samples, both mono and stereo, while being conditioned on a textual description or melodic features, allowing better control over the generated output. We conduct an extensive empirical evaluation, considering both automatic and human studies, showing the proposed approach is superior to the evaluated baselines on a standard text-to-music benchmark. Through ablation studies, we shed light on the importance of each of the components comprising MusicGen. Music samples, code, and models are available at https://github.com/facebookresearch/audiocraft
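The token interleaving idea mentioned in the abstract can be illustrated with a small sketch. The paper's "delay" pattern offsets each of the K parallel codebook streams by its index, so a single autoregressive step emits one token per codebook instead of requiring K sequential passes or a cascade of models. The function names and the `PAD` placeholder below are illustrative assumptions, not the official AudioCraft implementation:

```python
PAD = -1  # illustrative placeholder for positions with no token (assumption)

def delay_interleave(streams):
    """Flatten K equal-length codebook streams into one sequence of steps.

    streams: list of K token lists, all of length T.
    Returns a (T + K - 1) x K grid in which stream k is shifted right
    by k steps, mimicking the 'delay' interleaving pattern.
    """
    K = len(streams)
    T = len(streams[0])
    grid = [[PAD] * K for _ in range(T + K - 1)]
    for k, stream in enumerate(streams):
        for t, tok in enumerate(stream):
            grid[t + k][k] = tok  # codebook k lags by k steps
    return grid

def delay_deinterleave(grid, K):
    """Inverse mapping: recover the K original streams from the grid."""
    T = len(grid) - (K - 1)
    return [[grid[t + k][k] for t in range(T)] for k in range(K)]
```

For example, with two codebook streams `[1, 2, 3]` and `[4, 5, 6]`, the interleaved grid is `[[1, PAD], [2, 4], [3, 5], [PAD, 6]]`: the second codebook is predicted one step behind the first, so it can condition on the first codebook's token for the same time frame.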

Jade Copet, Felix Kreuk, Itai Gat, Tal Remez, David Kant, Gabriel Synnaeve, Yossi Adi, Alexandre Défossez • 2023

Related benchmarks

Task | Dataset | Metric | Result | Rank
Text-to-Audio Generation | AudioCaps (test) | FAD | 3.58 | 138
Text-to-Music Generation | MusicCaps (evaluation set) | FAD | 3.4 | 20
Text-to-Music Generation | MusicCaps | KLD | 1.22 | 11
Music Generation | MusicCaps | FAD | 3.8 | 11
Music Generation | MusicCaps (test) | FAD | 3.4 | 10
Text-to-Audio Generation | MusicCaps | FD_openl3 | 197.1 | 10
Controllable Music Generation | TestB | TB | 41 | 9
Music Generation | FMACaps | FD | 22.61 | 9
Controllable Music Generation | FMACaps full-control variant (test) | TB | 26.76 | 9
Music Generation | TestB | FD | 35.54 | 9
(Showing 10 of 33 rows.)
