MUKA: Multi Kernel Audio Adaptation Of Audio-Language Models
About
Multimodal foundation models have demonstrated impressive generalization capabilities, yet efficiently adapting them to new tasks in a few-shot setting remains a critical challenge. In this work, we investigate the few-shot adaptation of Large Audio-Language Models (ALMs) through both training-based and training-free approaches. We introduce MUKA, a multi-kernel adaptation framework that combines the fine-grained, context-dependent representations of instruction-tuning based models like Pengi with the global semantic representations of contrastive pretraining methods like CLAP. By constructing a product kernel that aligns local similarity with global semantics, MUKA enhances representational power while preserving the theoretical guarantees of kernel methods and avoiding additional training. Extensive experiments across 11 diverse audio datasets demonstrate that MUKA achieves state-of-the-art performance among training-free methods and even surpasses training-based adapters in several scenarios, offering a compelling balance between adaptability and efficiency.
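The core idea of a product kernel — multiplying a kernel over fine-grained local representations (e.g. from Pengi) with a kernel over global semantic embeddings (e.g. from CLAP) — can be sketched as follows. This is an illustrative sketch only: the RBF base kernels, the `few_shot_predict` nearest-class-mean rule, and all function names are assumptions, not MUKA's exact formulation. The key property it relies on is real: an element-wise product of two positive semi-definite kernels is itself a valid kernel (Schur product theorem), which is what lets a training-free method keep standard kernel guarantees.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    # Pairwise squared Euclidean distances between rows of X and rows of Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def product_kernel(X_loc, Y_loc, X_glob, Y_glob):
    # Element-wise product of a "local" kernel and a "global" kernel.
    # The product of two PSD kernels is PSD, so this remains a valid kernel.
    return rbf_kernel(X_loc, Y_loc) * rbf_kernel(X_glob, Y_glob)

def few_shot_predict(q_loc, q_glob, s_loc, s_glob, s_labels):
    # Hypothetical training-free classifier: average each query's
    # product-kernel similarity over the support shots of every class,
    # then predict the class with the highest mean similarity.
    K = product_kernel(q_loc, s_loc, q_glob, s_glob)  # (n_query, n_support)
    classes = np.unique(s_labels)
    scores = np.stack([K[:, s_labels == c].mean(axis=1) for c in classes], axis=1)
    return classes[scores.argmax(axis=1)]

# Toy example with random stand-ins for the two embedding spaces.
rng = np.random.default_rng(0)
s_loc, s_glob = rng.normal(size=(10, 16)), rng.normal(size=(10, 8))
s_labels = np.repeat(np.arange(5), 2)          # 5-way, 2-shot support set
q_loc, q_glob = rng.normal(size=(4, 16)), rng.normal(size=(4, 8))
preds = few_shot_predict(q_loc, q_glob, s_loc, s_glob, s_labels)
```

Because both base kernels must be large for the product to be large, a support example only scores highly when it matches the query in local, context-dependent structure *and* in global semantics, which is the intuition behind combining the two model families.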
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Audio Classification | ESC50 (test) | Accuracy (R@1) | 0.9803 | 28 |
| Urban Sound Classification | UrbanSound8K (test) | Accuracy | 88.8 | 28 |
| Audio Classification | CREMA-D (test) | Accuracy | 45.06 | 9 |
| Audio Classification | ESC50 Actions (test) | Accuracy | 99 | 7 |
| Audio Classification | GT-Music-Genre (test) | Accuracy | 83.17 | 7 |
| Audio Classification | NS-Instruments (test) | Accuracy | 73.24 | 7 |
| Audio Classification | SESA (test) | Accuracy | 90.16 | 7 |
| Audio Classification | TUT 2017 (test) | Accuracy | 82.88 | 7 |
| Audio Classification | VocalSound (test) | Accuracy | 85.52 | 7 |
| Audio Classification | Beijing-Opera (test) | Accuracy | 0.983 | 7 |