SoloAudio: Target Sound Extraction with Language-oriented Audio Diffusion Transformer
About
In this paper, we introduce SoloAudio, a novel diffusion-based generative model for target sound extraction (TSE). Our approach trains a latent diffusion model on audio, replacing the previous U-Net backbone with a skip-connected Transformer that operates on latent features. SoloAudio supports both audio-oriented and language-oriented TSE by using a CLAP model as the feature extractor for target sounds. Furthermore, SoloAudio leverages synthetic audio generated by state-of-the-art text-to-audio models for training, demonstrating strong generalization to out-of-domain data and unseen sound events. We evaluate this approach on the FSD Kaggle 2018 mixture dataset and on real data from AudioSet, where SoloAudio achieves state-of-the-art results on both in-domain and out-of-domain data and exhibits impressive zero-shot and few-shot capabilities. The source code and demos are released.
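The pipeline described above — encode the mixture into a latent space, condition a Transformer denoiser on a CLAP embedding of the target sound, and iteratively denoise — can be sketched as follows. This is a minimal illustrative skeleton, not the released implementation: `clap_embed`, `denoiser`, the latent shapes, and the Euler-style update are all stand-in assumptions.

```python
# Illustrative sketch of CLAP-conditioned latent-diffusion target sound extraction.
# All names, shapes, and the sampler below are hypothetical stand-ins, not the
# SoloAudio API: a real system would use trained CLAP, codec, and Transformer weights.
import numpy as np

rng = np.random.default_rng(0)

LATENT_DIM, LATENT_LEN, COND_DIM = 64, 128, 512


def clap_embed(prompt: str) -> np.ndarray:
    """Stand-in for a CLAP text (or audio) encoder yielding the target-sound embedding."""
    seed = abs(hash(prompt)) % (2**32)
    return np.random.default_rng(seed).standard_normal(COND_DIM)


def denoiser(z_t: np.ndarray, t: float, mix: np.ndarray, cond: np.ndarray) -> np.ndarray:
    """Stand-in for the skip-connected Transformer that predicts noise in the latent.

    A trained model would attend over latent frames of the mixture, modulated by
    the CLAP condition; here we only return a deterministic array of the right shape.
    """
    return 0.1 * z_t + 0.01 * mix + 0.001 * cond[:LATENT_DIM, None]


def extract(mixture_latent: np.ndarray, prompt: str, steps: int = 10) -> np.ndarray:
    """Toy sampling loop: start from noise, iteratively denoise toward the target latent."""
    cond = clap_embed(prompt)
    z = rng.standard_normal(mixture_latent.shape)
    for i in range(steps, 0, -1):
        t = i / steps
        eps = denoiser(z, t, mixture_latent, cond)
        z = z - (1.0 / steps) * eps  # crude Euler update standing in for a real sampler
    return z  # would be decoded back to a waveform by the latent codec


mixture_latent = rng.standard_normal((LATENT_DIM, LATENT_LEN))
target_latent = extract(mixture_latent, "a dog barking")
print(target_latent.shape)  # (64, 128)
```

The same loop covers both modes from the paper: language-oriented TSE feeds a text prompt through CLAP's text branch, while audio-oriented TSE would instead embed a reference audio clip with CLAP's audio branch.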
Related benchmarks
| Task | Dataset | SAJ Score | Rank |
|---|---|---|---|
| Text-prompted separation | Instr pro | 2.65 | 11 |
| Text-prompted separation | Speaker | 2.26 | 9 |
| Text-prompted separation | Instr(wild) | 2.92 | 9 |
| Text-prompted separation | Speech | 3.45 | 9 |
| Text-prompted separation | music | 2.68 | 7 |
| Text-prompted separation | General SFX | 3.29 | 5 |