
FastAdaSP: Multitask-Adapted Efficient Inference for Large Speech Language Model

About

In this study, we explore efficient inference for multitask Speech Language Models (SpeechLMs) via token reduction. Unlike other modalities such as vision or text, speech has unique temporal dependencies, so prior efficient-inference methods for those modalities are not directly applicable. Furthermore, efficient SpeechLM inference on long sequences and sparse signals remains largely unexplored. We therefore propose FastAdaSP, a weighted token merging framework designed for a range of speech-related tasks to improve the trade-off between efficiency and performance. Experimental results on WavLLM and Qwen-Audio show that our method achieves the state-of-the-art (SOTA) efficiency-performance trade-off compared with other baseline methods. Specifically, FastAdaSP achieved 7x memory efficiency and 1.83x decoding throughput without any degradation on tasks such as Emotion Recognition (ER) and Spoken Question Answering (SQA). The code will be available at https://github.com/yichen14/FastAdaSP
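To make the idea of weighted token merging concrete, here is a minimal, hypothetical sketch (not the authors' actual FastAdaSP implementation): adjacent speech tokens with the highest cosine similarity are iteratively merged, and each merge is a weighted average using per-token importance scores (e.g. derived from attention), so salient frames dominate the merged representation.

```python
import numpy as np

def weighted_merge(tokens, weights, reduce_ratio=0.5):
    """Illustrative weighted token merging for speech features.

    tokens:  (T, D) array of speech feature tokens
    weights: (T,)   importance weight per token (e.g. attention-derived)
    Repeatedly merges the most similar pair of temporal neighbors,
    averaging them weighted by their importance scores.
    """
    n_merges = int(len(tokens) * reduce_ratio)
    tokens = tokens.astype(float).copy()
    weights = weights.astype(float).copy()
    for _ in range(n_merges):
        if len(tokens) < 2:
            break
        # cosine similarity between each pair of adjacent (temporal) tokens
        a, b = tokens[:-1], tokens[1:]
        sim = (a * b).sum(1) / (
            np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1) + 1e-8
        )
        i = int(np.argmax(sim))  # most redundant neighbor pair
        w = weights[i] + weights[i + 1]
        merged = (weights[i] * tokens[i] + weights[i + 1] * tokens[i + 1]) / w
        tokens = np.concatenate([tokens[:i], merged[None], tokens[i + 2:]])
        weights = np.concatenate([weights[:i], [w], weights[i + 2:]])
    return tokens, weights
```

With `reduce_ratio=0.5`, a sequence of 8 tokens is reduced to 4 while the total importance mass is conserved, which is the basic mechanism a merging-based approach exploits to cut memory and decoding cost.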

Yichen Lu, Jiaqi Song, Chao-Han Huck Yang, Shinji Watanabe · 2024

Related benchmarks

Task                         | Dataset                   | Metric | Result | Rank
-----------------------------|---------------------------|--------|--------|-----
Automatic Speech Recognition | LibriSpeech clean (test)  | WER    | 4.28   | 1156
Automatic Speech Recognition | LibriSpeech (test-other)  | WER    | 6.02   | 1151
Automatic Speech Recognition | LibriSpeech (dev-other)   | WER    | 6.15   | 462
Speech Recognition           | LibriSpeech clean (dev)   | WER    | 0.0449 | 80
Automatic Speech Recognition | WenetSpeech Meeting (test)| CER    | 11.51  | 78
Automatic Speech Recognition | WenetSpeech Net (test)    | CER    | 12.3   | 57
Automatic Speech Recognition | AISHELL-1                 | CER    | 2.22   | 50
Automatic Speech Recognition | Fleurs En                 | WER    | 6.7    | 34
Automatic Speech Recognition | AISHELL-2                 | CER    | 4.69   | 29
Automatic Speech Recognition | Fleurs zh                 | CER    | 4.27   | 26
