
f-Divergence Minimization for Sequence-Level Knowledge Distillation

About

Knowledge distillation (KD) is the process of transferring knowledge from a large model to a small one. It has gained increasing attention in the natural language processing community, driven by the demands of compressing ever-growing language models. In this work, we propose an f-DISTILL framework, which formulates sequence-level knowledge distillation as minimizing a generalized f-divergence function. We propose four distilling variants under our framework and show that existing SeqKD and ENGINE approaches are approximations of our f-DISTILL methods. We further derive step-wise decomposition for our f-DISTILL, reducing intractable sequence-level divergence to word-level losses that can be computed in a tractable manner. Experiments across four datasets show that our methods outperform existing KD approaches, and that our symmetric distilling losses can better force the student to learn from the teacher distribution.
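The step-wise decomposition described above can be illustrated with a minimal sketch: given per-step next-token distributions from the teacher and the student, the sequence-level divergence is approximated as a sum of word-level divergences. The function name, array shapes, and the particular divergences shown (forward KL, reverse KL, Jensen–Shannon) are illustrative assumptions, not the paper's released code.

```python
import numpy as np

def stepwise_f_divergence(teacher_probs, student_probs, kind="kl"):
    """Sum a word-level divergence over decoding steps.

    teacher_probs, student_probs: arrays of shape (T, V), where row t is
    the next-token distribution over a V-word vocabulary at step t.
    kind: "kl" (forward KL(p||q)), "rkl" (reverse KL(q||p)), or
    "js" (Jensen-Shannon, a symmetric divergence).
    """
    eps = 1e-12  # avoid log(0) for zero-probability tokens
    p = np.asarray(teacher_probs, dtype=float) + eps
    q = np.asarray(student_probs, dtype=float) + eps
    if kind == "kl":       # mode-covering: student spreads over teacher support
        step = np.sum(p * np.log(p / q), axis=-1)
    elif kind == "rkl":    # mode-seeking: student concentrates on teacher modes
        step = np.sum(q * np.log(q / p), axis=-1)
    elif kind == "js":     # symmetric mixture of the two KL directions
        m = 0.5 * (p + q)
        step = 0.5 * np.sum(p * np.log(p / m), axis=-1) \
             + 0.5 * np.sum(q * np.log(q / m), axis=-1)
    else:
        raise ValueError(f"unknown divergence: {kind}")
    # Sequence-level loss = sum of tractable word-level losses.
    return float(step.sum())
```

In practice the student's distributions would carry gradients (e.g., as softmax outputs of a neural model) and this quantity would be minimized by backpropagation; the NumPy version only shows the loss computation itself.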

Yuqiao Wen, Zichao Li, Wenyu Du, Lili Mou · 2023

Related benchmarks

Task                      Dataset                                  Metric               Result    Rank
Arithmetic Reasoning      GSM8K                                    Accuracy             0.00e+0   173
Instruction Following     UnNI                                     ROUGE-L              25.24     160
Instruction Following     S-NI                                     ROUGE-L              24.58     119
Instruction Following     DollyEval                                ROUGE-L              23.88     114
Instruction Following     Self-Instruct                            ROUGE-L              11.03     48
Language Modeling         SNLG and SNLU evaluation suites (test)   SNLG Score           60.01     44
Table-to-Text Generation  DART                                     METEOR               0.3788    30
Summarization             DIALOGSUM                                ROUGE-L              27.49     27
Machine Translation       Flores-200                               COMET                73.73     23
Dialogue                  Anthropic-HH (distillation set)          Response Word Count  72.68     16

(Showing 10 of 14 rows.)
