
MUKA: Multi-Kernel Audio Adaptation of Audio-Language Models

About

Multimodal foundation models have demonstrated impressive generalization capabilities, yet efficiently adapting them to new tasks in a few-shot setting remains a critical challenge. In this work, we investigate the few-shot adaptation of Large Audio-Language Models (ALMs) through both training-based and training-free approaches. We introduce MUKA, a multi-kernel adaptation framework that combines the fine-grained, context-dependent representations of instruction-tuning-based models like Pengi with the global semantic representations of contrastive pretraining methods like CLAP. By constructing a product kernel that aligns local similarity with global semantics, MUKA enhances representational power while preserving the theoretical guarantees of kernel methods and avoiding additional training. Extensive experiments across 11 diverse audio datasets demonstrate that MUKA achieves state-of-the-art performance among training-free methods and even surpasses training-based adapters in several scenarios, offering a compelling balance between adaptability and efficiency.
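To make the product-kernel idea concrete, the sketch below shows a training-free few-shot classifier built from the elementwise product of two similarity (Gram) matrices: one from a fine-grained "local" encoder and one from a global-semantic encoder. The random placeholder features, embedding dimensions, and the similarity-weighted vote are illustrative assumptions only, not the paper's exact formulation.

```python
# Minimal, hypothetical sketch of a product-kernel, training-free few-shot
# classifier. Random vectors stand in for real Pengi / CLAP features; this is
# not the authors' reference implementation.
import numpy as np

def cosine_gram(A, B):
    """Gram matrix of cosine similarities between rows of A and rows of B."""
    A = A / np.linalg.norm(A, axis=1, keepdims=True)
    B = B / np.linalg.norm(B, axis=1, keepdims=True)
    return A @ B.T

rng = np.random.default_rng(0)

# Assumed few-shot setting: 5 classes, 4 shots per class, 20 query clips.
n_classes, n_shots, n_query = 5, 4, 20
d_local, d_global = 768, 512          # assumed embedding dimensions

# Placeholder features standing in for the two views:
#   *_local  ~ fine-grained, context-dependent features (e.g., Pengi-style)
#   *_global ~ global semantic features (e.g., CLAP-style)
X_sup_local  = rng.normal(size=(n_classes * n_shots, d_local))
X_sup_global = rng.normal(size=(n_classes * n_shots, d_global))
X_qry_local  = rng.normal(size=(n_query, d_local))
X_qry_global = rng.normal(size=(n_query, d_global))
y_sup = np.repeat(np.arange(n_classes), n_shots)

# Product kernel: elementwise product of the two similarity matrices.
# A query and a support clip score high only if they agree under BOTH views.
K_local  = cosine_gram(X_qry_local,  X_sup_local)
K_global = cosine_gram(X_qry_global, X_sup_global)
K = K_local * K_global                # shape (n_query, n_support)

# Training-free prediction: similarity-weighted vote over support labels.
one_hot = np.eye(n_classes)[y_sup]    # shape (n_support, n_classes)
logits = K @ one_hot
y_pred = logits.argmax(axis=1)
print(y_pred)
```

Because the elementwise (Schur) product of two positive-semidefinite Gram matrices is itself positive semidefinite, the combined similarity remains a valid kernel, which is the sense in which kernel-method guarantees carry over to the product construction.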

Reda Bensaid, Amine Ouasfi, Yassir Bendou, Ilyass Moummad, Vincent Gripon, François Leduc-Primeau, Adnane Boukhayma • 2026

Related benchmarks

Task                          Dataset                 Metric         Result   Rank
Audio Classification          ESC50 (test)            R@1 Accuracy   0.9803   28
Urban Sound Classification    UrbanSound8K (test)     Accuracy       88.8     28
Audio Classification          CREMA-D (test)          Accuracy       45.06    9
Audio Classification          ESC50 Actions (test)    Accuracy       99       7
Audio Classification          GT-Music-Genre (test)   Accuracy       83.17    7
Audio Classification          NS-Instruments (test)   Accuracy       73.24    7
Audio Classification          SESA (test)             Accuracy       90.16    7
Audio Classification          TUT 2017 (test)         Accuracy       82.88    7
Audio Classification          VocalSound (test)       Accuracy       85.52    7
Audio Classification          Beijing-Opera (test)    Accuracy       0.983    7
Showing 10 of the 11 benchmark rows.
