Unlocking Strong Supervision: A Data-Centric Study of General-Purpose Audio Pre-Training Methods
About
Current audio pre-training seeks unified representations for broad audio understanding, but the field remains fragmented and is fundamentally bottlenecked by its reliance on weak, noisy, and scale-limited labels. Drawing lessons from vision's foundational pre-training blueprint, we argue that audio must first establish its own large-scale, strong-supervision framework. We introduce a data-centric pipeline that leverages a high-fidelity captioner to produce SOTA-quality captions, together with the first Unified Tag System (UTS) bridging speech, music, and environmental sounds. We then conduct a systematic comparative study of different pre-training objectives on this strongly supervised source data. Our experiments suggest that data quality and coverage are the primary drivers of performance, while the choice of objective dictates downstream task specialization.
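The paper's UTS implementation is not reproduced here; the sketch below is a minimal, hypothetical illustration of the core idea: mapping domain-specific labels from speech, music, and environmental taxonomies onto one shared tag space so a single supervision signal can span all three domains. The class name `UnifiedTagSystem`, the tag vocabulary, and the example mappings are all assumptions for illustration, not the authors' actual taxonomy.

```python
# Hypothetical sketch of a cross-domain tag unifier in the spirit of a
# Unified Tag System (UTS). Names, tags, and mappings are illustrative
# assumptions, not the paper's published taxonomy.
from __future__ import annotations
from dataclasses import dataclass, field

@dataclass
class UnifiedTagSystem:
    # Maps (source domain, source label) -> canonical unified tag.
    mapping: dict[tuple[str, str], str] = field(default_factory=dict)

    def register(self, domain: str, label: str, unified_tag: str) -> None:
        """Bridge a domain-specific label into the shared tag space."""
        self.mapping[(domain, label.lower())] = unified_tag

    def unify(self, domain: str, label: str) -> str:
        """Resolve a raw label; fall back to a namespaced tag if unmapped."""
        return self.mapping.get((domain, label.lower()),
                                f"{domain}/{label.lower()}")

uts = UnifiedTagSystem()
# Labels from different source taxonomies that denote related concepts
# collapse onto shared tags, so one tagging head can supervise all domains.
uts.register("speech", "Female speech", "human_voice/speech")
uts.register("music", "Vocals", "human_voice/singing")
uts.register("env", "Dog bark", "animal/dog")

print(uts.unify("speech", "Female speech"))  # -> human_voice/speech
print(uts.unify("env", "Thunder"))           # -> env/thunder (unmapped fallback)
```

The fallback keeps unmapped labels usable by namespacing them per domain rather than discarding them, one plausible way to preserve label coverage while the unified vocabulary is built out.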
Related benchmarks
| Task | Dataset | Metric | Score | Rank |
|---|---|---|---|---|
| Musical Instrument Classification | NSynth | Accuracy | 63.62 | 106 |
| Environmental Sound Classification | FSD50K | mAP | 48.5 | 91 |
| Audio Classification | VGG-Sound | Top-1 Accuracy | 40.81 | 83 |
| Audio Captioning | AudioCaps | -- | -- | 47 |
| Text-to-Audio Retrieval | AudioCaps | Recall@1 | 29.66 | 35 |
| Emotion Recognition | CREMA-D | -- | -- | 23 |
| Music-to-Text Retrieval | MusicCaps | R@1 | 19.8 | 12 |
| Audio Tagging | MagnaTagATune (MTAT) | mAP | 39.6 | 11 |
| Audio Tagging | AudioSet Strong | mAP | 14 | 9 |
| Speaker Identification | VoxCeleb2 | Accuracy | 38.78 | 9 |