
Adaptive Discovery of Interpretable Audio Attributes with Multimodal LLMs for Low-Resource Classification

About

In predictive modeling for low-resource audio classification, extracting attributes that are both accurate and interpretable is critical. Interpretable audio attributes are especially indispensable in high-reliability applications. While human-driven attribute discovery is effective, its low throughput is a bottleneck. We propose a method for adaptively discovering interpretable audio attributes using Multimodal Large Language Models (MLLMs). By replacing humans in the AdaFlock framework with MLLMs, our method achieves significantly faster attribute discovery. It dynamically identifies salient acoustic characteristics via prompting and constructs an attribute-based ensemble classifier. Experimental results across various audio tasks demonstrate that our method outperforms direct MLLM prediction in the majority of evaluated cases. Training completes within 11 minutes, making it a practical, adaptive solution that surpasses conventional human-reliant approaches.
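The loop described above can be pictured as: prompt an MLLM for candidate interpretable acoustic attributes, query it for a per-clip judgment on each attribute, and combine the judgments in an ensemble. The sketch below is a minimal illustration of that idea, not the paper's AdaFlock-based implementation; `propose_attributes` and `judge_attribute` are hypothetical stand-ins for the MLLM prompting calls and are stubbed with random outputs so the script runs end to end.

```python
import random

# --- Hypothetical MLLM calls (stubbed so the sketch runs end to end) ---

def propose_attributes(task_description, n=5):
    """Stand-in for prompting an MLLM:
    'List n salient, human-interpretable acoustic attributes for <task>.'"""
    return [f"attribute_{i}" for i in range(n)]

def judge_attribute(clip_path, attribute):
    """Stand-in for an MLLM yes/no judgment on a single clip:
    'Does this recording exhibit <attribute>?'"""
    return random.random() > 0.5  # boolean judgment

# --- Attribute-based ensemble: one weak voter per discovered attribute ---

def train_ensemble(clips, labels, attributes):
    ensemble = []
    for attr in attributes:
        judgments = [judge_attribute(c, attr) for c in clips]
        # Orient the attribute so it agrees with the labels as often as possible.
        acc = sum(int(j) == y for j, y in zip(judgments, labels)) / len(labels)
        flip = acc < 0.5
        weight = max(acc, 1 - acc)  # accuracy-weighted vote
        ensemble.append((attr, flip, weight))
    return ensemble

def predict(clip, ensemble):
    score = 0.0
    for attr, flip, weight in ensemble:
        vote = judge_attribute(clip, attr) ^ flip
        score += weight if vote else -weight
    return int(score > 0)

# Toy usage with fabricated clip identifiers and binary labels.
clips = [f"clip_{i}.wav" for i in range(20)]
labels = [i % 2 for i in range(20)]
attributes = propose_attributes("binary emotion classification from speech")
ensemble = train_ensemble(clips, labels, attributes)
print([predict(c, ensemble) for c in clips[:5]])
```

In a real pipeline the per-clip judgments would come from prompting the MLLM over the raw audio, and the attribute set could be refined adaptively based on which voters end up carrying weight in the ensemble.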

Kosuke Yoshimura, Hisashi Kashima • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Audio Classification | ESC-50 (test) | Accuracy | 87.15 | 87 |
| Binary Audio Classification | CREMA-D (test) | Mean Accuracy | 72.45 | 3 |
| Binary Audio Classification | RAVDESS (test) | Mean Accuracy | 68.55 | 3 |
| Binary Audio Classification | Coswara (test) | Mean Accuracy | 55.7 | 3 |
