Multi-Source Diffusion Models for Simultaneous Music Generation and Separation
About
In this work, we define a diffusion-based generative model capable of both music synthesis and source separation by learning the score of the joint probability density of sources sharing a context. Alongside the classic total inference tasks (i.e., generating a mixture, separating the sources), we also introduce and experiment on the partial generation task of source imputation, where we generate a subset of the sources given the others (e.g., play a piano track that goes well with the drums). Additionally, we introduce a novel inference method for the separation task based on Dirac likelihood functions. We train our model on Slakh2100, a standard dataset for musical source separation, provide qualitative results in the generation settings, and showcase competitive quantitative results in the source separation setting. Our method is the first example of a single model that can handle both generation and separation tasks, thus representing a step toward general audio models.
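To make the idea concrete, the sketch below shows how a single score network over the *stacked* sources can serve all three inference modes: total generation (sample all stems, sum them into a mixture), partial generation (pin the given stems at each step, sample the rest), and separation (constrain the stems to sum to an observed mixture). This is a minimal illustration under stated assumptions, not the authors' implementation: `JointScoreNet`, the annealed-Langevin step sizes, and the least-squares projection used for the mixture constraint are all hypothetical stand-ins (the projection in particular is a crude substitute for the paper's Dirac-likelihood posterior sampling, and the real model is a U-Net trained on Slakh2100 waveforms).

```python
import torch
import torch.nn as nn

N_SOURCES = 4   # e.g. the four Slakh2100 stems: bass, drums, guitar, piano
T_STEPS = 50    # number of reverse-diffusion steps (toy value)


class JointScoreNet(nn.Module):
    """Stand-in for the score network s_theta(x_t, t) over all stacked sources.

    The paper trains a U-Net on waveforms; a tiny conv stack is used here
    purely so the sampler below runs end to end.
    """

    def __init__(self, n_sources: int = N_SOURCES, hidden: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_sources + 1, hidden, kernel_size=5, padding=2),  # +1 channel carries t
            nn.SiLU(),
            nn.Conv1d(hidden, n_sources, kernel_size=5, padding=2),
        )

    def forward(self, x_t: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # x_t: (batch, n_sources, length); broadcast t as an extra channel
        t_chan = t.view(-1, 1, 1).expand(x_t.shape[0], 1, x_t.shape[-1])
        return self.net(torch.cat([x_t, t_chan], dim=1))


@torch.no_grad()
def sample(score_net, length, known=None, mask=None, mixture=None, sigma_max=1.0):
    """Annealed Langevin-style sampling in the joint source space.

    - Total generation: no constraints; the mixture is the sum of the stems.
    - Partial generation (imputation): pass `known` stems and a 0/1 `mask`;
      masked stems are re-noised and pinned at every step, so only the
      missing ones are sampled.
    - Separation: pass a `mixture`; a least-squares projection keeps the
      stems summing to it (an assumption standing in for the paper's
      Dirac-likelihood inference method).
    """
    x = sigma_max * torch.randn(1, N_SOURCES, length)
    for sigma in torch.linspace(sigma_max, 1e-3, T_STEPS):
        if mask is not None:  # pin the given sources at the current noise level
            x = torch.where(mask.bool(), known + sigma * torch.randn_like(known), x)
        if mixture is not None:  # project onto the hyperplane sum(stems) = mixture
            x = x + (mixture - x.sum(dim=1, keepdim=True)) / N_SOURCES
        step = 0.1 * sigma ** 2  # toy step-size schedule
        x = x + step * score_net(x, sigma.view(1)) + torch.sqrt(2 * step) * torch.randn_like(x)
    if mask is not None:  # enforce the constraints exactly on the final sample
        x = torch.where(mask.bool(), known, x)
    if mixture is not None:
        x = x + (mixture - x.sum(dim=1, keepdim=True)) / N_SOURCES
    return x


net = JointScoreNet()  # untrained here: outputs are noise, but the plumbing runs
stems = sample(net, length=1024)                 # total generation
mix = stems.sum(dim=1, keepdim=True)             # the generated mixture
mask = torch.zeros(1, N_SOURCES, 1024)
mask[:, 1] = 1.0                                 # keep stem 1 (e.g. drums)...
accomp = sample(net, 1024, known=stems * mask, mask=mask)  # ...generate the rest
sources = sample(net, 1024, mixture=mix)         # separation from a mixture
```

The design point the sketch highlights is that all conditioning happens at inference time: because the network models the joint score over the stems rather than each stem in isolation, one trained model serves generation, imputation, and separation without task-specific retraining.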
Related benchmarks
| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Vocal Source Separation | MUSDB18-HQ | cSDR | 3.64 | 14 |
| Audio Source Separation | Slakh2100 (test) | -- | -- | 9 |
| Multi-track music generation | MUSDB18 | CBS | 0.469 | 5 |
| Multi-track music generation | Slakh2100 | CBS | 0.469 | 5 |
| Multi-track music generation | Slakh2100 (test) | FAD | 6.55 | 5 |
| Inner Track Rhythmic Stability | Slakh2100 | IRS (Bass) | 0.05 | 4 |
| Multi-track Rhythmic Synchronization | Slakh2100 | CBS | 0.4694 | 4 |
| Partial audio generation | Slakh2100 | sub-FAD (B) | 0.43 | 3 |