
Self-Distilled Reasoner: On-Policy Self-Distillation for Large Language Models

About

Knowledge distillation improves large language model (LLM) reasoning by compressing the knowledge of a teacher LLM to train smaller LLMs. On-policy distillation advances this approach by having the student sample its own trajectories while a teacher LLM provides dense token-level supervision, addressing the distribution mismatch between training and inference that affects off-policy distillation methods. However, on-policy distillation typically requires a separate, often larger, teacher LLM and does not explicitly leverage the ground-truth solutions available in reasoning datasets. Inspired by the intuition that a sufficiently capable LLM can rationalize external privileged reasoning traces and teach its weaker self, we introduce On-Policy Self-Distillation (OPSD), a learning algorithm in which a single LLM acts as both teacher and student under different contexts. The teacher policy conditions on privileged information (e.g., verified reasoning traces) while the student policy sees only the question; training minimizes the per-token divergence between these two distributions over the student's own rollouts. We demonstrate the efficacy of our method on multiple mathematical reasoning benchmarks, achieving superior token efficiency compared to reinforcement learning methods and better performance than off-policy distillation methods. Code repo: https://github.com/siyan-zhao/OPSD.
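The objective described above can be sketched numerically. The snippet below is a minimal illustration, not the authors' implementation: it assumes the teacher and student are the same model evaluated under two contexts (question + privileged trace vs. question only), represented here by two logit arrays over a student rollout, and it uses the reverse KL (student-to-teacher), a common choice for on-policy distillation; the abstract says only "per-token divergence", so the exact divergence is an assumption. The function name `opsd_token_loss` is hypothetical.

```python
import numpy as np

def softmax(logits, axis=-1):
    # Numerically stable softmax over the vocabulary axis.
    z = logits - logits.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def opsd_token_loss(student_logits, teacher_logits):
    """Per-token reverse KL(student || teacher) over a student rollout.

    student_logits: (T, V) logits from the model given the question only.
    teacher_logits: (T, V) logits from the SAME model given the question
        plus privileged information (e.g., a verified reasoning trace).
    Returns a (T,) array of per-token divergences; averaging it gives a
    scalar distillation loss. (Illustrative sketch, not the paper's code.)
    """
    p_s = softmax(student_logits)
    log_p_s = np.log(p_s + 1e-12)
    log_p_t = np.log(softmax(teacher_logits) + 1e-12)
    return (p_s * (log_p_s - log_p_t)).sum(axis=-1)

# Toy usage: 4 rollout tokens, vocabulary of size 8.
rng = np.random.default_rng(0)
s = rng.normal(size=(4, 8))
t = rng.normal(size=(4, 8))
per_token = opsd_token_loss(s, t)        # one non-negative value per token
same = opsd_token_loss(s, s)             # zero when the contexts agree
```

In a real training loop the student's rollout would be sampled on-policy, both forward passes would share weights, and gradients would flow only through the student-context logits.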

Siyan Zhao, Zhihui Xie, Mengchen Liu, Jing Huang, Guan Pang, Feiyu Chen, Aditya Grover • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Multimodal Reasoning | MMMU | Accuracy | 63.82 | 130 |
| Multimodal Reasoning | WeMath | Accuracy | 54.95 | 129 |
| Multimodal Reasoning | MathVision | Accuracy | 47.53 | 102 |
| Multimodal Reasoning | MathVista | Accuracy | 75.1 | 72 |
| Mathematical Reasoning | HMMT 2025 | -- | -- | 70 |
| Mathematical Reasoning | AMO-Bench | Average@16 | 14.3 | 12 |
| Multimodal Reasoning | ZeroBench | Accuracy | 21.06 | 6 |
