
Dual-Space Knowledge Distillation for Large Language Models

About

Knowledge distillation (KD) is a promising approach to compressing large language models (LLMs) by transferring their knowledge to smaller models. During this process, white-box KD methods usually minimize the distance between the output distributions of the two models so that more knowledge can be transferred. However, in the current white-box KD framework, the output distributions come from the respective output spaces of the two models, produced by their own prediction heads. We argue that this space discrepancy leads to low similarity between the teacher model and the student model at both the representation and distribution levels. Furthermore, it also hinders KD between models with different vocabularies, which is common among current LLMs. To address these issues, we propose a dual-space knowledge distillation (DSKD) framework that unifies the output spaces of the two models for KD. On the basis of DSKD, we further develop a cross-model attention mechanism that automatically aligns the representations of two models with different vocabularies. Thus, our framework is not only compatible with various distance functions for KD (e.g., KL divergence) like the current framework, but also supports KD between any two LLMs regardless of their vocabularies. Experiments on task-agnostic instruction-following benchmarks show that DSKD significantly outperforms the current white-box KD framework with various distance functions, and also surpasses existing KD methods for LLMs with different vocabularies.
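As a rough illustration of the idea (not the authors' exact implementation), the student's hidden states can be mapped into the teacher's representation space via attention over the teacher's hidden states, after which both output distributions come from the teacher's prediction head and the KD loss is computed in a single shared space. The module name, the single query projection, and the toy dimensions below are all assumptions for the sketch:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)


class CrossModelAttention(nn.Module):
    """Hypothetical alignment module: student hidden states (dim d_s)
    attend over teacher hidden states (dim d_t) to produce student
    representations expressed in the teacher's space."""

    def __init__(self, d_s: int, d_t: int):
        super().__init__()
        self.q_proj = nn.Linear(d_s, d_t)  # student states -> queries in teacher dim

    def forward(self, student_h, teacher_h):
        # student_h: (B, S, d_s); teacher_h: (B, S, d_t)
        q = self.q_proj(student_h)                         # (B, S, d_t)
        scores = q @ teacher_h.transpose(-2, -1)           # (B, S, S)
        scores = scores / teacher_h.size(-1) ** 0.5        # scaled dot-product
        attn = scores.softmax(dim=-1)
        return attn @ teacher_h                            # (B, S, d_t)


def unified_space_kd_loss(aligned_h, teacher_h, teacher_head):
    """Both distributions are produced by the *teacher's* head, so the
    KL divergence is measured in one unified output space."""
    s_logits = teacher_head(aligned_h)
    t_logits = teacher_head(teacher_h)
    return F.kl_div(
        s_logits.log_softmax(dim=-1),
        t_logits.softmax(dim=-1),
        reduction="batchmean",
    )


# Toy shapes: student dim 8, teacher dim 16, teacher vocab 32.
B, S, d_s, d_t, V = 2, 5, 8, 16, 32
cma = CrossModelAttention(d_s, d_t)
teacher_head = nn.Linear(d_t, V, bias=False)

student_h = torch.randn(B, S, d_s)
teacher_h = torch.randn(B, S, d_t)
aligned = cma(student_h, teacher_h)            # student reps in teacher space
loss = unified_space_kd_loss(aligned, teacher_h, teacher_head)
```

Because the loss is computed under the teacher's head, the two models' vocabularies never need to match; only the alignment module bridges their hidden dimensions. A symmetric pass in the student's space (teacher representations projected into the student's space and scored by the student's head) would complete the "dual-space" picture.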

Songming Zhang, Xue Zhang, Zengkui Sun, Yufeng Chen, Jinan Xu • 2024

Related benchmarks

Task | Dataset | Metric | Result | Rank
Code Generation | HumanEval | Pass@1 | 46.4 | 850
Mathematical Reasoning | GSM8K (test) | Accuracy | 51.5 | 797
Physical Interaction Question Answering | PIQA | Accuracy | 72.5 | 323
Boolean Question Answering | BoolQ | Accuracy | 77.2 | 307
Sentence Completion | HellaSwag | Accuracy | 48.5 | 133
Commonsense Reasoning | CommonsenseQA | Accuracy | 74.7 | 132
Multiple-choice Question Answering | ARC Easy | Accuracy | 73.1 | 122
Code Generation | MBPP | Pass@1 | 38.7 | 113
Multiple-choice Question Answering | ARC Challenge | Accuracy | 45.9 | 106
Summarization | DialogSum 1.5k examples (val) | ROUGE-L | 30.3 | 11
