
Fast and Effective On-policy Distillation from Reasoning Prefixes

About

On-policy distillation (OPD), which samples trajectories from the student model and supervises them with a teacher at the token level, avoids relying solely on verifiable terminal rewards and can yield better generalization than off-policy distillation. However, OPD requires expensive on-the-fly sampling from the student policy during training, which substantially increases training cost, especially for long responses. Our initial analysis shows that, during OPD, training signals are often concentrated in the prefix of each output, and that even a short teacher-generated prefix can significantly help the student produce the correct answer. Motivated by these observations, we propose a simple yet effective modification of OPD: we apply the distillation objective only to prefixes of student-generated outputs and terminate each sampled rollout early during distillation. Experiments on a suite of AI-for-Math and out-of-domain benchmarks show that on-policy prefix distillation matches the performance of full OPD while reducing training FLOPs by 2x-47x.
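The core idea, restricting the token-level distillation loss to a prefix of each sampled output, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, epsilon constant, and the use of per-token KL(student || teacher) over raw logit arrays are our assumptions, and the real method additionally terminates student sampling early during generation rather than merely masking the loss.

```python
import numpy as np

def softmax(logits, axis=-1):
    # Numerically stable softmax over the vocabulary axis.
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def prefix_distillation_loss(student_logits, teacher_logits, prefix_len):
    """Token-level distillation loss restricted to the first `prefix_len` tokens.

    student_logits, teacher_logits: arrays of shape (seq_len, vocab_size),
    scored along a trajectory sampled from the student.
    Returns the mean per-token KL(student || teacher) over the prefix only;
    positions beyond `prefix_len` contribute no training signal.
    (Hypothetical sketch; the actual objective may differ.)
    """
    s = softmax(student_logits[:prefix_len])
    t = softmax(teacher_logits[:prefix_len])
    # Per-token reverse KL: sum_v s(v) * (log s(v) - log t(v)).
    kl = (s * (np.log(s + 1e-12) - np.log(t + 1e-12))).sum(axis=-1)
    return float(kl.mean())
```

Because the loss only touches the first `prefix_len` positions, the student rollout can be cut off there, which is where the reported FLOPs savings come from: sampling cost grows with generated length, while most of the training signal sits in the prefix.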

Dongxu Zhang, Zhichao Yang, Sepehr Janghorbani, Jun Han, Andrew Ressler II, Qian Qian, Gregory D. Lyng, Sanjit Singh Batra, Robert E. Tillman • 2026

Related benchmarks

Task | Dataset | Metric | Result | Rank
Mathematical Reasoning | MATH500 | Accuracy | 68.1 | 45
Mathematical Reasoning | AIME 24 | Mean Accuracy | 10.6 | 10
Multi-task Knowledge Understanding | MMLU-Pro | Mean Accuracy | 41.3 | 10
Science Question Answering | GPQA | Mean Accuracy | 24.6 | 10
