Self-Distillation Zero: Self-Revision Turns Binary Rewards into Dense Supervision

About

Current post-training methods in verifiable settings fall into two categories. Reinforcement learning with verifiable rewards (RLVR) relies on binary rewards, which are broadly applicable and powerful but provide only sparse supervision during training. Distillation provides dense token-level supervision, typically obtained from an external teacher or from high-quality demonstrations; such supervision can be costly to collect or simply unavailable. We propose Self-Distillation Zero (SD-Zero), a method that is substantially more sample-efficient in training than RL and requires neither an external teacher nor high-quality demonstrations. SD-Zero trains a single model to play two roles: a Generator, which produces an initial response, and a Reviser, which conditions on that response and its binary reward to produce an improved response. We then perform on-policy self-distillation to distill the reviser into the generator, using the reviser's token distributions, conditioned on the generator's response and its reward, as supervision. In effect, SD-Zero trains the model to transform binary rewards into dense token-level self-supervision. On math and code reasoning benchmarks with Qwen3-4B-Instruct and Olmo-3-7B-Instruct, SD-Zero improves performance by at least 10% over the base models and outperforms strong baselines, including Rejection Fine-Tuning (RFT), GRPO, and Self-Distillation Fine-Tuning (SDFT), under the same question set and training sample budget. Extensive ablation studies reveal two novel characteristics of the proposed algorithm: (a) token-level self-localization, where the reviser identifies, based on the reward, the key tokens in the generator's response that need revision, and (b) iterative self-evolution, where gains in revision ability are distilled back into generation ability through regular teacher synchronization.
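To make the training loop concrete, below is a minimal PyTorch sketch of one SD-Zero step, assuming a Hugging Face-style causal LM. The `verify` function, the `REVISE_TEMPLATE` reviser prompt, and the exact conditioning format are illustrative assumptions, not the authors' released implementation; the sketch only captures the core idea of scoring the generator's own response under a reward-conditioned reviser context and distilling those dense distributions back into the generator.

```python
import torch
import torch.nn.functional as F

def sd_zero_step(model, tokenizer, question, optimizer, max_new_tokens=512):
    """One illustrative SD-Zero update: generate, verify, self-distill."""
    # 1) Generator role: sample an initial response to the question.
    prompt_ids = tokenizer(question, return_tensors="pt").input_ids
    gen_ids = model.generate(prompt_ids, do_sample=True, max_new_tokens=max_new_tokens)
    response_ids = gen_ids[:, prompt_ids.shape[1]:]
    response = tokenizer.decode(response_ids[0], skip_special_tokens=True)

    # 2) Binary reward from a verifier (assumed helper, e.g. an
    #    exact-match answer checker returning 0 or 1).
    reward = verify(question, response)

    # 3) Reviser role: condition on the question, the generator's response,
    #    and its binary reward (REVISE_TEMPLATE is a hypothetical prompt),
    #    then score the SAME response tokens under this richer context to
    #    obtain dense token-level teacher distributions. The teacher pass
    #    is under no_grad, so it acts as a stop-gradient target.
    revise_prompt = REVISE_TEMPLATE.format(
        question=question, response=response, reward=reward
    )
    revise_ids = tokenizer(revise_prompt, return_tensors="pt").input_ids
    teacher_input = torch.cat([revise_ids, response_ids], dim=1)
    with torch.no_grad():
        # Logits at position i predict token i+1, so slice accordingly.
        teacher_logits = model(teacher_input).logits[:, revise_ids.shape[1] - 1:-1]

    # 4) Generator distributions over the same response tokens, without
    #    the reward-conditioned context.
    student_input = torch.cat([prompt_ids, response_ids], dim=1)
    student_logits = model(student_input).logits[:, prompt_ids.shape[1] - 1:-1]

    # 5) On-policy self-distillation: per-token KL(teacher || student),
    #    turning the single binary reward into dense supervision.
    per_token_kl = F.kl_div(
        F.log_softmax(student_logits, dim=-1),
        F.softmax(teacher_logits, dim=-1),
        reduction="none",
    ).sum(-1)
    loss = per_token_kl.mean()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In practice, the teacher forward pass would presumably use a periodically synchronized copy of the model rather than the live weights, which is the "regular teacher synchronization" that the abstract credits for iterative self-evolution.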

Yinghui He, Simran Kaur, Adithya Bhaskar, Yongjin Yang, Jiarui Liu, Narutatsu Ri, Liam Fowl, Abhishek Panigrahi, Danqi Chen, Sanjeev Arora • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Instruction Following | IFEval | – | – | 625 |
| Mathematical Reasoning | HMMT 2025 | – | – | 70 |
| Mathematical Reasoning | AIME 2024 | Mean Score (k=8) | 68.3 | 59 |
| Mathematical Reasoning | Math Benchmarks Aggregate | – | – | 44 |
| Mathematical Reasoning | AMO-Bench | Pass@8 | 36 | 20 |
| Competitive Programming | CodeForces | Average Score @8 | 56.1 | 14 |
| Competitive Programming | LiveCodeBench (LCB) | Avg@8 | 82.6 | 14 |
| Math Reasoning | HMMT25 | Pass@8 | 66.7 | 14 |
| Math Reasoning | OpenR1 | Pass@8 | 72 | 14 |
| Mathematical Reasoning | AMOBench | Avg@8 Score | 16 | 14 |

Showing 10 of 20 rows.
