
DistiLLM-2: A Contrastive Approach Boosts the Distillation of LLMs

About

Despite the success of distillation in large language models (LLMs), most prior work applies identical loss functions to both teacher- and student-generated data. These strategies overlook the synergy between loss formulations and data types, leading to a suboptimal performance boost in student models. To address this, we propose DistiLLM-2, a contrastive approach that simultaneously increases the likelihood of teacher responses and decreases that of student responses by harnessing this synergy. Our extensive experiments show that DistiLLM-2 not only builds high-performing student models across a wide range of tasks, including instruction-following and code generation, but also supports diverse applications, such as preference alignment and vision-language extensions. These findings highlight the potential of a contrastive approach to enhance the efficacy of LLM distillation by effectively aligning teacher and student models across varied data types.
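The contrastive objective described above can be thought of as two complementary terms: one that pulls the student toward the teacher's distribution on teacher-generated responses, and one that penalizes student probability mass the teacher does not support on student-generated responses. The PyTorch snippet below is an illustrative sketch only; the function name, the choice of KL directions, and the beta weighting are assumptions for exposition and are not the exact DistiLLM-2 loss.

```python
# Illustrative sketch of a contrastive distillation objective (NOT the exact
# DistiLLM-2 formulation). Assumptions: forward KL on teacher-generated data,
# reverse KL on student-generated data, and a scalar weight `beta`.
import torch
import torch.nn.functional as F


def contrastive_distill_loss(
    student_logits_t,  # [B, T, V] student logits on teacher-generated responses
    teacher_logits_t,  # [B, T, V] teacher logits on teacher-generated responses
    student_logits_s,  # [B, T, V] student logits on student-generated responses
    teacher_logits_s,  # [B, T, V] teacher logits on student-generated responses
    beta: float = 1.0,  # hypothetical weight balancing the two terms
):
    # Term 1 (teacher responses): forward KL(teacher || student), which raises
    # the student's likelihood of tokens the teacher prefers.
    kl_on_teacher_data = F.kl_div(
        F.log_softmax(student_logits_t, dim=-1),
        F.log_softmax(teacher_logits_t, dim=-1),
        log_target=True,
        reduction="batchmean",
    )
    # Term 2 (student responses): reverse KL(student || teacher), which
    # penalizes probability mass the student places where the teacher
    # assigns low probability.
    kl_on_student_data = F.kl_div(
        F.log_softmax(teacher_logits_s, dim=-1),
        F.log_softmax(student_logits_s, dim=-1),
        log_target=True,
        reduction="batchmean",
    )
    return kl_on_teacher_data + beta * kl_on_student_data


if __name__ == "__main__":
    # Toy example with random logits to show the expected tensor shapes.
    B, T, V = 2, 8, 1000
    s_t, t_t, s_s, t_s = (torch.randn(B, T, V) for _ in range(4))
    print(contrastive_distill_loss(s_t, t_t, s_s, t_s))
```

In practice the logits would come from running both models on the same token sequences, and padding positions would be masked before the reduction; both details are omitted here for brevity.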

Jongwoo Ko, Tianyi Chen, Sungnyun Kim, Tianyu Ding, Luming Liang, Ilya Zharkov, Se-Young Yun • 2025

Related benchmarks

Task                   | Dataset       | Metric           | Result | Rank
Mathematical Reasoning | MATH          | Accuracy         | 15.07  | 643
Code Generation        | MBPP          | Pass@1           | 45.63  | 175
Code Generation        | HumanEval     | Pass@1           | 38.14  | 108
Instruction Following  | DollyEval     | Score            | 38.28  | 106
Code Generation        | LiveCodeBench | Pass@1           | 28.93  | 86
Instruction Following  | VicunaEval    | VicunaEval Score | 36.8   | 80
Reasoning              | GPQA D        | Accuracy         | 13.47  | 29
