Language-Aware Distillation for Multilingual Instruction-Following Speech LLMs with ASR-Only Supervision
About
Speech Large Language Models (LLMs) that understand and follow instructions in many languages are useful for real-world interaction, but they are difficult to train with supervised fine-tuning, which requires large, task-specific speech corpora. Recent distillation-based approaches train performant English-only Speech LLMs from annotated ASR data alone, aligning text and speech with only a lightweight projector; however, these models underperform when scaled to multilingual settings due to language interference in the shared projector. We address this by introducing language-aware distillation: a query bank and a gating network that select or mix the query tokens of a Q-Former projector (see the sketch below). Our approach shows gains of 14% over matched multilingual distillation baselines on instruction following. We further synthesize Audio-MLQA, a multilingual spoken QA benchmark built on MLQA with questions rendered by high-quality TTS. Our best model improves over existing Speech LLM baselines by 32% on Audio-MLQA.
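The paragraph above describes the projector only at a high level. Below is a minimal PyTorch sketch of how a per-language query bank and a gating network could drive a Q-Former-style projector; the class name, layer sizes, and the soft-mixing choice are illustrative assumptions, not the released implementation.

```python
# A minimal sketch of a language-aware Q-Former projector, assuming PyTorch.
# Names (LanguageAwareQFormer, num_languages, ...) are hypothetical.
import torch
import torch.nn as nn


class LanguageAwareQFormer(nn.Module):
    """Q-Former-style projector whose query tokens are selected or mixed
    per utterance from a per-language query bank via a gating network."""

    def __init__(self, num_languages: int, num_queries: int = 32,
                 d_model: int = 768, n_heads: int = 8, n_layers: int = 2):
        super().__init__()
        # Query bank: one set of learnable query tokens per language.
        self.query_bank = nn.Parameter(
            torch.randn(num_languages, num_queries, d_model) * 0.02)
        # Gating network: predicts weights over the bank from
        # mean-pooled speech-encoder features.
        self.gate = nn.Sequential(
            nn.Linear(d_model, d_model), nn.ReLU(),
            nn.Linear(d_model, num_languages))
        # Q-Former stand-in: queries self-attend and cross-attend to speech.
        layer = nn.TransformerDecoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.qformer = nn.TransformerDecoder(layer, num_layers=n_layers)

    def forward(self, speech_feats: torch.Tensor) -> torch.Tensor:
        # speech_feats: (batch, time, d_model) from a frozen speech encoder.
        gate_logits = self.gate(speech_feats.mean(dim=1))  # (B, num_languages)
        weights = gate_logits.softmax(dim=-1)
        # Soft mixture of per-language query tokens: (B, num_queries, d_model).
        # An argmax over `weights` at inference would give hard selection.
        queries = torch.einsum('bl,lqd->bqd', weights, self.query_bank)
        # Cross-attend the queries to the speech features (Q-Former step);
        # the output is the fixed-length sequence fed to the LLM.
        return self.qformer(tgt=queries, memory=speech_feats)
```

A softmax gate mixes query sets smoothly across languages, while taking the argmax instead would select a single language's queries, matching the "selects or mixes" behavior described above.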
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Open-ended instruction following | Audio-OpenHermes | Score (EN) | 4.42 | 10 |
| Closed-ended spoken question answering | Audio-MLQA | Score (EN) | 4.16 | 10 |
| Open-ended instruction following | AlpacaEval Audio | Score (EN) | 4.58 | 10 |