AudioKV: KV Cache Eviction in Efficient Large Audio Language Models
About
Large Audio-Language Models (LALMs) have set new benchmarks in speech processing, yet their deployment is hindered by the memory footprint of the Key-Value (KV) cache during long-context inference. While general KV cache compression techniques excel in LLMs, they often fail in the audio domain because they overlook the intrinsic temporal continuity of acoustic signals. To bridge this gap, we propose AudioKV, a novel framework that robustly prioritizes audio-critical attention heads through a hardware-friendly semantic-acoustic alignment mechanism. Specifically, we identify these modality-specialized heads by analyzing attention scores on ASR tasks and dynamically allocate KV cache budgets preferentially to them. Furthermore, we introduce Spectral Score Smoothing (SSS), an FFT-based global filtering strategy that suppresses high-frequency noise and recovers the smooth global trend of importance scores, ensuring more balanced token selection. Extensive evaluations across multiple LALMs, including the Qwen and Gemma series, demonstrate that AudioKV significantly outperforms baselines while improving computational efficiency. Notably, at a 40% compression ratio, AudioKV maintains near-full accuracy on Qwen3-Omni-30B with only a 0.45% drop, whereas traditional methods suffer catastrophic performance degradation and repetition. Our code will be released after acceptance.
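The Spectral Score Smoothing idea described above (low-pass filtering of per-token importance scores via an FFT, then selecting the top tokens under a KV budget) could be sketched as follows. This is a minimal illustration, not the released implementation: the function names, the `keep_ratio` cutoff, and the use of a hard truncation of the spectrum are all assumptions.

```python
import numpy as np


def spectral_score_smoothing(scores: np.ndarray, keep_ratio: float = 0.1) -> np.ndarray:
    """Low-pass filter a 1-D sequence of token importance scores.

    High-frequency FFT components (noise in the raw attention-derived
    scores) are zeroed out; only the lowest `keep_ratio` fraction of
    frequency bins is kept, recovering the smooth global trend.
    `keep_ratio` is an illustrative hyperparameter, not a paper value.
    """
    spectrum = np.fft.rfft(scores)
    cutoff = max(1, int(len(spectrum) * keep_ratio))  # always keep the DC term
    spectrum[cutoff:] = 0.0
    return np.fft.irfft(spectrum, n=len(scores))


def select_tokens(scores: np.ndarray, budget: int, keep_ratio: float = 0.1) -> np.ndarray:
    """Pick `budget` token indices to retain in the KV cache,
    ranked by the smoothed (rather than raw) importance scores."""
    smoothed = spectral_score_smoothing(scores, keep_ratio)
    top = np.argsort(smoothed)[-budget:]
    return np.sort(top)  # return kept positions in temporal order
```

Because the smoothed scores vary slowly over time, the selected tokens tend to form contiguous runs rather than the scattered picks that raw, noisy scores produce, which is the "balanced token selection" behavior the abstract refers to.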
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Automatic Speech Recognition | ASR-ZH | WAR | 93.5 | 34 |
| Question Answering | SpeechTriviaQA | Accuracy | 42.2 | 15 |
| Speech-to-Speech Question Answering | Llama Questions | Accuracy | 77.3 | 15 |
| Audio Question Answering | speech-chatbot-alpaca-eval (S-Alpaca) | Accuracy | 66.7 | 8 |
| Audio Question Answering | speech-web-questions (S-Web) | Accuracy | 55.5 | 8 |
| Automatic Speech Recognition | ASR-EN | WAR | 95 | 8 |
| Speech Translation | ST-E2C | BLEU | 35.3 | 8 |