Entropy-Aware On-Policy Distillation of Language Models

About

On-policy distillation is a promising approach for transferring knowledge between language models: a student learns from dense token-level signals along its own trajectories. This framework typically uses reverse KL divergence, encouraging the student to match the teacher's high-confidence predictions. However, we show that the mode-seeking property of reverse KL reduces generation diversity and yields unstable learning signals when the teacher distribution has high entropy. To address this, we introduce Entropy-Aware On-Policy Distillation. Our key idea is to augment the standard reverse KL objective with forward KL on tokens where teacher entropy is high, capturing the full range of plausible outputs there while retaining precise imitation elsewhere. This balances mode-seeking precision with mode-covering robustness without sacrificing on-policy training efficiency. Experiments show that our method maintains generation diversity (sustained token-level entropy) and improves student-teacher alignment (lower forward KL on high-entropy tokens). Across six math reasoning benchmarks, this yields Pass@8 accuracy gains of +1.37 for Qwen3-0.6B-Base, +2.39 for Qwen3-1.7B-Base, and +5.05 for Qwen3-4B-Base over baseline on-policy distillation. These results demonstrate that accounting for teacher uncertainty is essential for maintaining diversity and achieving effective knowledge transfer.

Woogyeol Jin, Taywon Min, Yongjin Yang, Swanand Ravindra Kadhe, Yi Zhou, Dennis Wei, Nathalie Baracaldo, Kimin Lee • 2026
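
To make the idea concrete, here is a minimal sketch of an entropy-gated per-token distillation loss, assuming PyTorch tensors of logits from both models. The function name, the hard threshold `tau`, and the simple additive gating are illustrative assumptions; the abstract does not specify the paper's exact weighting scheme.

```python
import torch
import torch.nn.functional as F

def entropy_aware_distill_loss(student_logits, teacher_logits, tau=2.0):
    """Blend reverse and forward KL per token, gated by teacher entropy.

    A sketch of the idea in the abstract: apply reverse KL everywhere,
    and additionally apply forward KL on tokens where the teacher
    distribution has high entropy. `tau` (entropy threshold in nats)
    and the hard gate are illustrative choices, not the paper's exact
    formulation. Logits have shape (batch, seq_len, vocab).
    """
    s_logp = F.log_softmax(student_logits, dim=-1)
    t_logp = F.log_softmax(teacher_logits, dim=-1)
    s_p, t_p = s_logp.exp(), t_logp.exp()

    # Reverse KL: KL(student || teacher), mode-seeking.
    rkl = (s_p * (s_logp - t_logp)).sum(-1)   # (batch, seq_len)
    # Forward KL: KL(teacher || student), mode-covering.
    fkl = (t_p * (t_logp - s_logp)).sum(-1)   # (batch, seq_len)

    # Teacher entropy per token, in nats.
    t_ent = -(t_p * t_logp).sum(-1)           # (batch, seq_len)

    # On high-entropy tokens, add the mode-covering term;
    # elsewhere, fall back to pure reverse KL.
    gate = (t_ent > tau).float()
    return (rkl + gate * fkl).mean()
```

In on-policy training, both sets of logits would be computed on trajectories sampled from the student itself; a smooth entropy-dependent weight could replace the hard gate here without changing the overall picture.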

Related benchmarks

Task                     Dataset          Metric            Result   Rank
Instruction Following    AlpacaEval 2.0   Win Rate          29.54    507
General Reasoning        MMLU-Pro         Pass@1 Accuracy   43.2     69
Mathematical Reasoning   OlympiadBench    Avg@8             43.24    21
Mathematical Reasoning   Minerva          Avg@8             39.71    12
General Reasoning        GPQA Diamond     Avg@8             31.5     4
Mathematical Reasoning   MATH500          Avg@8             44.9     4
Mathematical Reasoning   AIME 25          Avg@8             2.08     4
