
Stemphonic: All-at-once Flexible Multi-stem Music Generation

About

Music stem generation, the task of producing musically synchronized and isolated instrument audio clips, offers the potential for greater user control and better alignment with musician workflows compared to conventional text-to-music models. Existing stem generation approaches, however, either rely on fixed architectures that output a predefined set of stems in parallel, or generate only one stem at a time, resulting in slow inference despite flexibility in stem combination. We propose Stemphonic, a diffusion-/flow-based framework that overcomes this trade-off and generates a variable set of synchronized stems in one inference pass. During training, we treat each stem as a batch element, group synchronized stems in a batch, and apply a shared noise latent to each group. At inference time, we use a shared initial noise latent and stem-specific text inputs to generate synchronized multi-stem outputs in one pass. We further extend our approach to enable one-pass conditional multi-stem generation and stem-wise activity controls, empowering users to iteratively generate and orchestrate the temporal layering of a mix. We benchmark our results on multiple open-source stem evaluation sets and show that Stemphonic produces higher-quality outputs while accelerating the full mix generation process by 25 to 50%. Demos at: https://stemphonic-demo.vercel.app.
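The core inference idea described above — one shared initial noise latent, stem-specific conditioning, stems stacked along the batch dimension — can be sketched as follows. This is a minimal illustration only: `generate_stems`, `denoise_fn`, and all parameters are hypothetical names, not Stemphonic's actual API.

```python
import numpy as np

def generate_stems(stem_prompts, latent_shape, denoise_fn, num_steps=50, seed=0):
    """One-pass multi-stem generation sketch (hypothetical interface).

    All K stems start from ONE shared noise latent; only the per-stem text
    conditioning differs. The batch dimension carries the stems, so a single
    batched denoising pass yields K synchronized outputs.
    """
    rng = np.random.default_rng(seed)
    shared_noise = rng.standard_normal(latent_shape)            # one latent
    latents = np.stack([shared_noise] * len(stem_prompts))      # (K, *latent_shape)
    for step in range(num_steps):
        t = 1.0 - step / num_steps                              # noise level
        latents = denoise_fn(latents, stem_prompts, t)          # batched update
    return latents
```

Because the initial latent is identical across the batch, any divergence between stems comes solely from the stem-specific conditioning inside `denoise_fn`, which is what keeps the generated stems musically aligned.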

Shih-Lun Wu, Ge Zhu, Juan-Pablo Caceres, Cheng-Zhi Anna Huang, Nicholas J. Bryan • 2026

Related benchmarks

Task                         | Dataset                | Result       | Rank
Multi-stem Music Generation  | MoisesDB K=3 (n=190)   | FADmix 1.06  | 3
Multi-stem Music Generation  | MoisesDB K=4 (n=456)   | FADmix 1.12  | 3
Multi-stem Music Generation  | MoisesDB K=5 (n=379)   | FADmix 1.34  | 3
Multi-stem Music Generation  | MoisesDB K=6 (n=283)   | FADmix 2.29  | 3
