
Do We Need Distinct Representations for Every Speech Token? Unveiling and Exploiting Redundancy in Large Speech Language Models

About

Large Speech Language Models (LSLMs) typically operate at high token rates (tokens/s) to ensure acoustic fidelity, yet this results in sequence lengths that far exceed the underlying semantic content, incurring prohibitive inference costs. In this paper, we empirically revisit the necessity of such granular token-level processing. Through layer-wise oracle interventions, we unveil a structured redundancy hierarchy: while shallow layers encode essential acoustic details, deep layers exhibit extreme redundancy, allowing for aggressive compression. Motivated by these findings, we introduce Affinity Pooling, a training-free, similarity-based token merging mechanism. By strategically applying this method at both input and deep layers, we effectively compress speech representations without compromising semantic information. Extensive evaluations across three tasks demonstrate that our approach reduces prefilling FLOPs by 27.48% while maintaining competitive accuracy. Practical deployment further confirms significant efficiency gains, yielding up to ~1.7× memory savings and ~1.1× faster time-to-first-token on long utterances. Our results challenge the necessity of fully distinct token representations, providing new perspectives on LSLM efficiency.
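The abstract describes Affinity Pooling only at a high level: a training-free mechanism that merges adjacent speech tokens whose representations are highly similar. As a minimal sketch of that idea, the following greedily mean-pools neighboring token embeddings whose cosine similarity exceeds a threshold. The function name, the greedy left-to-right pass, and the threshold value are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def affinity_pool(tokens: np.ndarray, threshold: float = 0.9) -> np.ndarray:
    """Greedy similarity-based merging of adjacent token embeddings (sketch).

    tokens: (T, D) array of per-token hidden states.
    Adjacent tokens whose cosine similarity with the current merged
    representation exceeds `threshold` are mean-pooled together;
    otherwise a new output token is started.
    """
    merged = [tokens[0].astype(float)]
    for t in tokens[1:]:
        prev = merged[-1]
        # Cosine similarity between the running representation and the next token.
        sim = float(prev @ t / (np.linalg.norm(prev) * np.linalg.norm(t) + 1e-8))
        if sim > threshold:
            merged[-1] = (prev + t) / 2.0  # merge: simple average (one of several choices)
        else:
            merged.append(t.astype(float))
    return np.stack(merged)
```

Applied to a run of near-duplicate frame embeddings, this collapses each run into one representative vector, shortening the sequence the model must attend over while leaving dissimilar (information-bearing) tokens untouched.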

Bajian Xiang, Tingwei Guo, Xuan Chen, Yang Han• 2026

Related benchmarks

Task | Dataset | Metric | Result | Rank
Automatic Speech Recognition | LibriSpeech clean (test) | WER | 1.3 | 1156
Automatic Speech Recognition | LibriSpeech (test-other) | WER | 2.55 | 1151
Automatic Speech Recognition | LibriSpeech Other | WER | 3.77 | 96
Automatic Speech Recognition | LibriSpeech Clean | WER | 1.63 | 80
Automatic Speech Recognition | KeSpeech | -- | -- | 17
Speech-to-Text Question-Answering | OBQA | Accuracy | 83.08 | 16
Question Answering | SpeechTriviaQA | Accuracy | 21.58 | 15
Question Answering | SDQA | Accuracy | 37.79 | 14
Speech Translation | CoVoST2 en2zh | BLEU | 42.86 | 14
Speech-to-Text Translation | CoVoST2 zh-en | BLEU | 23.06 | 9

(Showing 10 of 13 rows)
