
Dual-Space Knowledge Distillation for Large Language Models

About

Knowledge distillation (KD) is a promising technique for compressing large language models (LLMs) by transferring their knowledge to smaller models. During this process, white-box KD methods usually minimize the distance between the output distributions of the two models so that more knowledge can be transferred. However, in the current white-box KD framework, the output distributions come from the respective output spaces of the two models, produced by their own prediction heads. We argue that this space discrepancy leads to low similarity between the teacher model and the student model at both the representation and distribution levels. Furthermore, it also hinders KD between models with different vocabularies, which is common for current LLMs. To address these issues, we propose a dual-space knowledge distillation (DSKD) framework that unifies the output spaces of the two models for KD. Building on DSKD, we further develop a cross-model attention mechanism that automatically aligns the representations of two models with different vocabularies. Thus, our framework is not only compatible with various distance functions for KD (e.g., KL divergence), like the current framework, but also supports KD between any two LLMs regardless of their vocabularies. Experiments on task-agnostic instruction-following benchmarks show that DSKD significantly outperforms the current white-box KD framework with various distance functions, and also surpasses existing KD methods for LLMs with different vocabularies.
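To make the baseline concrete, the standard white-box KD objective the abstract refers to minimizes a distance (commonly forward KL divergence) between the teacher's and student's next-token distributions, which presupposes a shared output space. Below is a minimal, dependency-free sketch of that per-position loss; the function names and the toy 4-token vocabulary are illustrative, not from the paper.

```python
import math

def softmax(logits):
    # Numerically stable softmax: subtract the max before exponentiating.
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    z = sum(exps)
    return [e / z for e in exps]

def forward_kl(teacher_logits, student_logits):
    # KL(p_teacher || p_student) for one token position: the usual
    # white-box KD loss, minimized w.r.t. the student. Note it only
    # makes sense when both models share one vocabulary / output
    # space -- exactly the assumption DSKD relaxes.
    p = softmax(teacher_logits)
    q = softmax(student_logits)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Toy next-token logits over a 4-word vocabulary.
teacher = [2.0, 1.0, 0.5, -1.0]
student = [1.5, 1.2, 0.3, -0.5]
print(forward_kl(teacher, student))   # small positive distance
print(forward_kl(teacher, teacher))   # identical distributions give 0.0
```

DSKD's contribution is upstream of this loss: it projects both models into one shared space (via cross-model attention when vocabularies differ) so that a distance like the one above can be applied at all.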

Songming Zhang, Xue Zhang, Zengkui Sun, Yufeng Chen, Jinan Xu • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Code Generation | HumanEval | Pass@1 | 46.4 | 1036 |
| Mathematical Reasoning | GSM8K (test) | Accuracy | 51.5 | 900 |
| Physical Interaction Question Answering | PIQA | Accuracy | 72.5 | 333 |
| Boolean Question Answering | BoolQ | Accuracy | 77.2 | 323 |
| Sentence Completion | HellaSwag | Accuracy | 48.5 | 276 |
| Multiple-choice Question Answering | ARC Easy | Accuracy | 73.1 | 188 |
| Instruction Following | UnNI | Rouge-L | 22.13 | 160 |
| Code Generation | MBPP | Pass@1 | 38.7 | 159 |
| Commonsense Reasoning | CommonsenseQA | Accuracy | 74.7 | 136 |
| Instruction Following | S-NI | Rouge-L | 19.22 | 119 |
(Showing 10 of 15 rows.)
