DeSTA2: Developing Instruction-Following Speech Language Model Without Speech Instruction-Tuning Data

About

Recent end-to-end speech language models (SLMs) have expanded upon the capabilities of large language models (LLMs) by incorporating pre-trained speech models. However, these SLMs often undergo extensive speech instruction-tuning to bridge the gap between speech and text modalities. This requires significant annotation efforts and risks catastrophic forgetting of the original language capabilities. In this work, we present a simple yet effective automatic process for creating speech-text pair data that carefully injects speech paralinguistic understanding abilities into SLMs while preserving the inherent language capabilities of the text-based LLM. Our model demonstrates general capabilities for speech-related tasks without the need for speech instruction-tuning data, achieving impressive performance on Dynamic-SUPERB and AIR-Bench-Chat benchmarks. Furthermore, our model exhibits the ability to follow complex instructions derived from LLMs, such as specific output formatting and chain-of-thought reasoning. Our approach not only enhances the versatility and effectiveness of SLMs but also reduces reliance on extensive annotated datasets, paving the way for more efficient and capable speech understanding systems.

Ke-Han Lu, Zhehuai Chen, Szu-Wei Fu, Chao-Han Huck Yang, Jagadeesh Balam, Boris Ginsburg, Yu-Chiang Frank Wang, Hung-yi Lee • 2024
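
The abstract describes the data-creation process only at a high level: speech paired with metadata is rendered as text, the backbone text LLM answers a prompt about it, and its response becomes the training target for the speech model. The sketch below illustrates that general idea; the dataclass fields, prompt format, and `llm_generate` helper are assumptions for illustration, not the authors' actual pipeline.

```python
# Hypothetical sketch of the self-generated speech-text pairing idea from the
# abstract: metadata-rich speech is described in text, the same text LLM that
# will serve as the SLM backbone generates the response, and the resulting
# (speech, prompt, response) triple is used as training data. No human-written
# speech instruction-tuning data is required, and because the targets come
# from the backbone LLM itself, its language ability is less likely to drift.
# All names below are assumptions, not the paper's code.

from dataclasses import dataclass


@dataclass
class SpeechClip:
    audio_path: str   # path to the raw waveform
    transcript: str   # existing transcript for the clip
    attributes: dict  # paralinguistic metadata, e.g. {"gender": "female", "emotion": "happy"}


def describe(clip: SpeechClip) -> str:
    """Render the clip's transcript and metadata as plain text the LLM can read."""
    attrs = ", ".join(f"{k}: {v}" for k, v in clip.attributes.items())
    return f'"{clip.transcript}" ({attrs})'


def build_training_pair(clip: SpeechClip, prompt: str, llm_generate) -> dict:
    """Create one (speech, prompt, target) training example.

    `llm_generate(text) -> str` is a stand-in for querying the backbone text LLM;
    its answer to the textual description is reused as the target for the
    corresponding audio input.
    """
    target = llm_generate(f"{describe(clip)}\n\n{prompt}")
    return {"audio": clip.audio_path, "prompt": prompt, "target": target}
```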

Related benchmarks

Task | Dataset | Metric | Score | Rank
Massive Multi-discipline Audio Understanding | MMAU | Speech Score | 55.86 | 17
Spoken Intelligence Evaluation | LLM_Voice 1.0 (test) | Remembering Score-2 | 57.5 | 13
Audio Instruction Following | VoiceBench | AlpacaEval Score | 4.36 | 10
Audio Understanding | Dynamic-Superb (test) | CON Accuracy | 79.41 | 8
Audio Reasoning | SAKURA | Single Score | 63 | 8
Instruction Following | Speech-IFEval | IF Rate | 89.23 | 7
