UniWhisper: Efficient Continual Multi-task Training for Robust Universal Audio Representation
About
A universal audio representation should capture both fine-grained speech cues and the high-level semantics of environmental sounds and music in a single encoder. Existing encoders often excel in one domain but degrade in the others. We propose UniWhisper, an efficient continual multi-task training framework that casts heterogeneous audio tasks into a unified instruction-and-answer format, enabling standard next-token training without task-specific heads or losses. We train the model on 38k hours of public audio and assess the encoder with shallow MLP probes and k-nearest-neighbor (kNN) classification on 20 tasks spanning speech, environmental sound, and music. UniWhisper reaches normalized weighted averages of 0.81 with MLP probes and 0.61 with kNN, compared to 0.64 and 0.46 for Whisper, while retaining strong speech performance.
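The kNN evaluation described above classifies held-out clips by majority vote among their nearest neighbors in the frozen encoder's embedding space, without training any new parameters. The sketch below illustrates that protocol with cosine-similarity kNN; the embeddings, class count, and split are synthetic placeholders (the real setup would use UniWhisper encoder outputs and the paper's 20 task datasets), so treat it as a minimal illustration rather than the evaluation code.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for frozen-encoder embeddings: 100 clips,
# 512-dim features, 5 classes. Each class gets a distinct mean so the
# synthetic data carries a learnable signal.
n, dim, n_classes = 100, 512, 5
labels = rng.integers(0, n_classes, size=n)
class_means = rng.normal(0.0, 1.0, size=(n_classes, dim))
embeds = class_means[labels] + 0.1 * rng.normal(0.0, 1.0, size=(n, dim))

def knn_predict(train_x, train_y, query_x, k=5):
    """Cosine-similarity kNN: majority vote over the k nearest train clips."""
    tx = train_x / np.linalg.norm(train_x, axis=1, keepdims=True)
    qx = query_x / np.linalg.norm(query_x, axis=1, keepdims=True)
    sims = qx @ tx.T                        # (n_query, n_train) similarities
    nn = np.argsort(-sims, axis=1)[:, :k]   # indices of the k nearest neighbors
    votes = train_y[nn]                     # neighbor labels, (n_query, k)
    return np.array(
        [np.bincount(v, minlength=n_classes).argmax() for v in votes]
    )

# Simple split: first 80 clips as the labeled reference set, last 20 evaluated.
preds = knn_predict(embeds[:80], labels[:80], embeds[80:])
acc = (preds == labels[80:]).mean()
```

Because no classifier is trained, kNN accuracy directly reflects how well the frozen representation clusters by label, which is why the paper reports it alongside MLP probes.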
Related benchmarks
| Task | Dataset | Metric | Score | Rank |
|---|---|---|---|---|
| Musical Instrument Classification | NSynth | Accuracy | 70.7 | 106 |
| Speech Emotion Recognition | RAVDESS | -- | -- | 43 |
| Music Genre Classification | GTZAN | Accuracy | 94.5 | 39 |
| Speaker Identification | LibriSpeech MF | Score | 98.1 | 26 |
| Language Identification | VoxLingua33 | Accuracy | 89.5 | 26 |
| Speaker Counting | LibriCount | Score | 64.4 | 26 |
| Acoustic Event Classification | VocalSound | Normalized Score | 93 | 20 |
| Automatic Speaker Verification | ASV 2015 | Normalized Score | 99 | 20 |
| Music Genre Classification | FMA (Free Music Archive) | Normalized Score | 68.9 | 20 |
| Speaker Identification | VoxCeleb 1 | Normalized Score | 45.5 | 20 |