BLSP-KD: Bootstrapping Language-Speech Pre-training via Knowledge Distillation
About
Recent end-to-end approaches have shown promise in extending large language models (LLMs) to speech inputs, but they face limitations in directly assessing and optimizing alignment quality, and they fail to achieve fine-grained alignment due to the speech-text length mismatch. We introduce BLSP-KD, a novel approach for Bootstrapping Language-Speech Pre-training via Knowledge Distillation, which addresses these limitations through two key techniques. First, it optimizes speech-text alignment by minimizing the divergence between the LLM's next-token prediction distributions for speech and text inputs using knowledge distillation. Second, it employs a continuous integrate-and-fire strategy to segment speech into tokens that correspond one-to-one with text tokens, enabling fine-grained alignment. We also introduce Partial LoRA (PLoRA), a new adaptation method supporting LLM finetuning for speech inputs under knowledge distillation. Quantitative evaluation shows that BLSP-KD outperforms previous end-to-end baselines and cascaded systems of comparable parameter scale, facilitating general instruction-following capabilities for LLMs with speech inputs. This approach provides new possibilities for extending LLMs to spoken language interactions.
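The following is a minimal sketch (not the authors' released code) of the token-level knowledge-distillation objective described above: the LLM's next-token distributions for the text input serve as the teacher, and the distributions for the CIF-segmented speech tokens serve as the student. The function name `kd_alignment_loss`, the tensor shapes, and the temperature parameter are illustrative assumptions; the sketch assumes the CIF module has already produced speech tokens aligned one-to-one with the text tokens.

```python
import torch
import torch.nn.functional as F

def kd_alignment_loss(text_logits: torch.Tensor,
                      speech_logits: torch.Tensor,
                      temperature: float = 1.0) -> torch.Tensor:
    """KL divergence between teacher (text-input) and student (speech-input)
    next-token prediction distributions.

    Both tensors are assumed to have shape [batch, seq_len, vocab]; CIF
    segmentation is assumed to yield one speech token per text token, so the
    sequence lengths match and positions can be compared directly.
    """
    # Teacher distributions come from the text branch and are not updated.
    teacher = F.softmax(text_logits.detach() / temperature, dim=-1)
    # Student distributions come from the speech branch (gradients flow here).
    student_log = F.log_softmax(speech_logits / temperature, dim=-1)
    # Per-token KL(teacher || student), summed over the vocabulary dimension.
    kl = F.kl_div(student_log, teacher, reduction="none").sum(dim=-1)
    # Average over batch and positions; scale by T^2 as is conventional in KD.
    return kl.mean() * (temperature ** 2)
```

In practice this distillation term would be combined with the usual next-token prediction loss on text targets; the exact weighting is a design choice not specified here.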
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Speech Translation | CoVoST-2 (test) | Avg BLEU (15 Dir) | 30.5 | 46 |
| Speech Translation | MuST-C (test) | -- | -- | 29 |
| Audio Understanding | MMAU (test) | Speech Score | 53.01 | 25 |
| Audio-conditioned reasoning | MMSU | Accuracy | 54.68 | 8 |
| Audio-conditioned reasoning | OBQA | Accuracy | 74.72 | 8 |
| Audio-conditioned reasoning | GSM8K | Accuracy | 46.57 | 8 |