BLSP-KD: Bootstrapping Language-Speech Pre-training via Knowledge Distillation

About

Recent end-to-end approaches have shown promise in extending large language models (LLMs) to speech inputs, but they face limitations in directly assessing and optimizing alignment quality and fail to achieve fine-grained alignment due to the speech-text length mismatch. We introduce BLSP-KD, a novel approach for Bootstrapping Language-Speech Pre-training via Knowledge Distillation, which addresses these limitations through two key techniques. First, it optimizes speech-text alignment by minimizing the divergence between the LLM's next-token prediction distributions for speech and text inputs using knowledge distillation. Second, it employs a continuous integrate-and-fire strategy to segment speech into tokens that correspond one-to-one with text tokens, enabling fine-grained alignment. We also introduce Partial LoRA (PLoRA), a new adaptation method supporting LLM finetuning for speech inputs under knowledge distillation. Quantitative evaluation shows that BLSP-KD outperforms previous end-to-end baselines and cascaded systems of comparable parameter scale, facilitating general instruction-following capabilities for LLMs with speech inputs. This approach provides new possibilities for extending LLMs to spoken language interactions.
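
To illustrate the distillation objective described above, the sketch below computes a KL divergence between the LLM's next-token distributions for text input (teacher) and for speech input (student), assuming the speech has already been segmented into a token sequence of the same length as the text. The function name, tensor shapes, and temperature handling are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn.functional as F

def kd_alignment_loss(teacher_logits: torch.Tensor,
                      student_logits: torch.Tensor,
                      temperature: float = 1.0) -> torch.Tensor:
    """Knowledge-distillation alignment loss (illustrative sketch).

    teacher_logits: next-token logits from the LLM given text input,
        shape [batch, seq_len, vocab_size]
    student_logits: next-token logits from the LLM given speech input,
        shape [batch, seq_len, vocab_size]; assumes speech was segmented
        (e.g. via a continuous integrate-and-fire module) into tokens
        aligned one-to-one with the text tokens.
    """
    teacher_probs = F.softmax(teacher_logits / temperature, dim=-1)
    student_log_probs = F.log_softmax(student_logits / temperature, dim=-1)
    # KL(teacher || student) per position, then averaged over batch and positions
    kl = F.kl_div(student_log_probs, teacher_probs, reduction="none")
    return kl.sum(dim=-1).mean() * temperature ** 2
```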

Chen Wang, Minpeng Liao, Zhongqiang Huang, Jiajun Zhang • 2024

Related benchmarks

Task                          Dataset           Metric              Result   Rank
Speech Translation            CoVoST-2 (test)   Avg BLEU (15 Dir)   30.5     46
Speech Translation            MuST-C (test)     --                  --       29
Audio Understanding           MMAU (test)       Speech Score        53.01    25
Audio-conditioned reasoning   MMSU              Accuracy            54.68    8
Audio-conditioned reasoning   OBQA              Accuracy            74.72    8
Audio-conditioned reasoning   GSM8K             Accuracy            46.57    8
