
BLSP: Bootstrapping Language-Speech Pre-training via Behavior Alignment of Continuation Writing

About

The emergence of large language models (LLMs) has sparked significant interest in extending their remarkable language capabilities to speech. However, modality alignment between speech and text remains an open problem. Current solutions fall into two strategies. One is a cascaded approach, where the outputs (tokens or states) of a separately trained speech recognition system are used as inputs for LLMs, which limits their potential for modeling alignment between speech and text. The other is an end-to-end approach that relies on speech instruction data, which is very difficult to collect in large quantities. In this paper, we address these issues and propose the BLSP approach, which Bootstraps Language-Speech Pre-training via behavior alignment of continuation writing. We achieve this by learning a lightweight modality adapter between a frozen speech encoder and an LLM, ensuring that the LLM exhibits the same generation behavior regardless of the modality of input: a speech segment or its transcript. The training process consists of two steps. The first step prompts an LLM to generate text with speech transcripts as prefixes, obtaining text continuations. In the second step, these continuations are used as supervised signals to train the modality adapter in an end-to-end manner. We demonstrate that this straightforward process can extend the capabilities of LLMs to speech, enabling speech recognition, speech translation, spoken language understanding, and speech conversation, even in zero-shot cross-lingual scenarios.
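The two-step process above can be illustrated with a toy numerical sketch: a frozen "LLM" first labels each transcript with a continuation (step 1), and a lightweight linear adapter is then trained so the same LLM produces those labels from speech features instead (step 2). Everything here is an illustrative assumption — random matrices stand in for the real frozen speech encoder and LLM, and the adapter is a single linear map rather than the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
D, V, N = 8, 5, 64  # embedding dim, vocab size, number of examples (toy sizes)

# Frozen stand-ins (illustrative, not the paper's real components):
W_llm = rng.normal(size=(D, V))        # frozen "LLM head": embedding -> token logits
text_emb = rng.normal(size=(N, D))     # embeddings of the speech transcripts
speech_feat = rng.normal(size=(N, D))  # frozen speech-encoder outputs, paired with text_emb

# Step 1: the frozen LLM "continues" each transcript; its predicted
# next tokens become the supervision signal for the adapter.
teacher_tokens = np.argmax(text_emb @ W_llm, axis=1)

# Step 2: train a lightweight linear adapter so that feeding adapted
# speech features to the frozen LLM reproduces the same continuations.
W_adapter = rng.normal(size=(D, D)) * 0.1

def loss_and_grad(W):
    """Cross-entropy of LLM predictions on adapted speech vs. teacher tokens."""
    logits = (speech_feat @ W) @ W_llm
    logits -= logits.max(axis=1, keepdims=True)          # numerical stability
    probs = np.exp(logits)
    probs /= probs.sum(axis=1, keepdims=True)
    loss = -np.mean(np.log(probs[np.arange(N), teacher_tokens] + 1e-12))
    dlogits = probs.copy()
    dlogits[np.arange(N), teacher_tokens] -= 1.0         # softmax-CE gradient
    dlogits /= N
    grad = speech_feat.T @ (dlogits @ W_llm.T)           # chain rule through W_llm
    return loss, grad

init_loss, _ = loss_and_grad(W_adapter)
for step in range(300):                                  # plain gradient descent
    loss, grad = loss_and_grad(W_adapter)
    W_adapter -= 0.1 * grad

final_loss, _ = loss_and_grad(W_adapter)
print(f"alignment loss: {init_loss:.3f} -> {final_loss:.3f}")
```

Only the adapter's parameters are updated; the speech encoder and LLM stay frozen throughout, which is what keeps the method lightweight and avoids the need for speech instruction data.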

Chen Wang, Minpeng Liao, Zhongqiang Huang, Jinliang Lu, Junhong Wu, Yuchen Liu, Chengqing Zong, Jiajun Zhang • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Emotion Recognition | IEMOCAP | Accuracy | 41.24 | 71 |
| Speech Translation | CoVoST-2 (test) | -- | -- | 46 |
| Speech Recognition | MuST-C (test) | WER (Avg) | 44.51 | 30 |
| Speech Translation | MuST-C (test) | BLEU Score | 28.7 | 29 |
| Speech Question Answering | MuST-C (test) | EM | 5.6 | 27 |
| Speech Emotion Recognition | MELD | Accuracy | 50.47 | 19 |
| Emotion Recognition | RAVDESS | Accuracy | 11.1 | 19 |
| Emotion Recognition | IEMOCAP, MELD, RAVDESS, SAVEE Average | Average Accuracy | 36.02 | 17 |
| Emotion Reasoning | Overall (test) | Factual Alignment (FA) | 1.01 | 17 |
| Emotion Recognition | SAVEE | Accuracy | 10.77 | 17 |
(Showing 10 of 18 rows.)
