SICL-AT: Another way to adapt auditory LLMs to low-resource tasks
About
Auditory Large Language Models (LLMs) have demonstrated strong performance across a wide range of speech and audio understanding tasks. Nevertheless, they often struggle when applied to low-resource or unfamiliar tasks. When labeled in-domain data is scarce or mismatched to the true test distribution, direct fine-tuning can be brittle. In-Context Learning (ICL) provides a training-free, inference-time alternative: the auditory LLM is adapted by conditioning on a few in-domain demonstrations. In this work, we first show that *Vanilla ICL* improves zero-shot performance across diverse speech and audio tasks for the selected models, suggesting that this ICL adaptation capability generalizes to the multimodal setting. Building on this, we propose **Speech In-Context Learning Adaptation Training (SICL-AT)**, a post-training recipe that uses only high-resource speech data to strengthen the model's in-context learning capability. The resulting improvement also generalizes to audio understanding and reasoning tasks. Experiments show that our method consistently outperforms direct fine-tuning in low-resource scenarios.
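For concreteness, the sketch below illustrates how a Vanilla ICL prompt for an auditory LLM could be assembled: a few in-domain (audio, answer) demonstrations are interleaved before the query utterance, and the model answers with no gradient update. The message schema and the `SpeechExample` / `build_icl_messages` names are illustrative assumptions modeled on common multimodal chat formats, not this repository's actual API.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class SpeechExample:
    audio_path: str   # path to a demonstration audio clip
    transcript: str   # its ground-truth answer (e.g., a transcription)

def build_icl_messages(demos: List[SpeechExample], query_audio: str,
                       instruction: str = "Transcribe the speech.") -> List[dict]:
    """Interleave k in-domain demonstrations before the query utterance.

    Each demonstration is rendered as a user (audio + instruction) turn
    followed by an assistant (answer) turn, so the auditory LLM conditions
    on the in-context examples purely at inference time.
    """
    messages = []
    for demo in demos:
        messages.append({"role": "user",
                         "content": [{"type": "audio", "path": demo.audio_path},
                                     {"type": "text", "text": instruction}]})
        messages.append({"role": "assistant",
                         "content": [{"type": "text", "text": demo.transcript}]})
    # The actual query: same instruction, answer left for the model to generate.
    messages.append({"role": "user",
                     "content": [{"type": "audio", "path": query_audio},
                                 {"type": "text", "text": instruction}]})
    return messages
```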
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Children's Automatic Speech Recognition | RSR | WER | 16.59 | 22 |
| Audio Understanding | MMAU | Accuracy | 73.4 | 20 |
| Audio Understanding / Audio Reasoning | MMAR | Accuracy | 61.4 | 13 |
| Children's Automatic Speech Recognition | MyST | WER | 11.49 | 13 |
| Multilingual Automatic Speech Recognition | CommonVoice | WER (de) | 4.42 | 13 |
| Speech Translation | CoVoST2 en→ja | BLEU | 47.57 | 13 |
| Speech Translation | CoVoST2 ja→en | BLEU | 26.46 | 13 |
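For reference, the snippet below shows how the metrics reported above are conventionally computed with the standard `jiwer` and `sacrebleu` libraries; the repository's own evaluation scripts may differ in text normalization and tokenization.

```python
# pip install jiwer sacrebleu
import jiwer
import sacrebleu

# Toy reference/hypothesis pair, purely for illustration.
refs = ["the cat sat on the mat"]
hyps = ["the cat sat on a mat"]

# Word Error Rate, as reported for RSR / MyST / CommonVoice (lower is better).
wer = jiwer.wer(refs, hyps)
print(f"WER: {100 * wer:.2f}")

# Corpus-level BLEU, as reported for CoVoST2 speech translation (higher is better).
bleu = sacrebleu.corpus_bleu(hyps, [refs])
print(f"BLEU: {bleu.score:.2f}")
```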