
A Model Can Help Itself: Reward-Free Self-Training for LLM Reasoning

About

Can language models improve their reasoning performance without external rewards, using only their own sampled responses for training? We show that they can. We propose Self-evolving Post-Training (SePT), a simple post-training method that alternates between self-generation and training on self-generated responses. It repeatedly samples questions, uses the model itself to generate low-temperature responses, and then finetunes the model on the self-generated data. In this self-training loop, we use an online data refresh mechanism, where each new batch is generated by the most recently updated model. Across six math reasoning benchmarks, SePT improves on a strong no-training baseline, defined as the untuned base model evaluated at its best swept decoding temperature, for several tested models. In some settings, SePT can even approach the performance of Reinforcement Learning with Verifiable Rewards (RLVR). Additional ablations demonstrate the importance of online data refresh and temperature decoupling. Overall, our results identify a practical regime in which reasoning can be improved using self-generated supervision alone. Our code is available at https://github.com/ElementQi/SePT.
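The alternating loop described in the abstract can be sketched in a few lines. The sketch below is an illustrative reconstruction, not the authors' implementation: `generate` and `finetune` are hypothetical callables standing in for real sampling and training code. It captures the control flow SePT describes, where each round samples fresh questions, generates low-temperature responses with the current model, and finetunes on that newly generated batch (online data refresh).

```python
import random

def sept_loop(model, question_pool, generate, finetune,
              rounds=3, batch_size=4, gen_temperature=0.3):
    """Toy sketch of a SePT-style self-training loop (assumed interface).

    `generate(model, question, temperature)` and `finetune(model, pairs)`
    are placeholders for real LLM sampling and supervised finetuning.
    """
    for _ in range(rounds):
        # 1) Online data refresh: sample a fresh batch of questions each round.
        questions = random.sample(question_pool, batch_size)
        # 2) Low-temperature self-generation with the most recently updated model.
        responses = [generate(model, q, temperature=gen_temperature)
                     for q in questions]
        # 3) Finetune on the freshly self-generated (question, response) pairs;
        #    the next round's data comes from this updated model.
        model = finetune(model, list(zip(questions, responses)))
    return model
```

Note that generation temperature here is a knob separate from any evaluation-time decoding temperature, mirroring the temperature decoupling the ablations examine.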

Mengqi Li, Lei Zhao, Anthony Man-Cho So, Ruoyu Sun, Xiao Li • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Instruction Following | IFEval | Accuracy | 23.6 | 625 |
| Graduate-level Question Answering | GPQA | Accuracy | 30.6 | 184 |
| Reasoning | Big-Bench Hard (BBH) | Accuracy | 47.3 | 33 |
| Multi-task Knowledge and Reasoning | MMLU-Pro | Average Score @1 | 32.2 | 21 |
| Math Reasoning | Mean of six math benchmarks | Pass@1 | 39.5 | 12 |
| Multistep Reasoning | MuSR | Accuracy | 41.5 | 3 |
