
SRFT: A Single-Stage Method with Supervised and Reinforcement Fine-Tuning for Reasoning

About

Large language models (LLMs) have achieved remarkable progress in reasoning tasks, yet the optimal integration of Supervised Fine-Tuning (SFT) and Reinforcement Learning (RL) remains a fundamental challenge. Through comprehensive analysis of token distributions, learning dynamics, and integration mechanisms from entropy-based perspectives, we reveal key differences between these paradigms: SFT induces coarse-grained global changes to LLM policy distributions, while RL performs fine-grained selective optimizations, with entropy serving as a critical indicator of training effectiveness. Building on these observations, we propose Supervised Reinforcement Fine-Tuning (SRFT), a single-stage method that unifies both fine-tuning paradigms through entropy-aware weighting mechanisms. Our approach simultaneously applies SFT and RL to directly optimize the LLM using demonstrations and self-exploration rollouts rather than through two-stage sequential methods. Extensive experiments show that SRFT achieves 59.1% average accuracy, outperforming zero-RL methods by 9.0% on five mathematical reasoning benchmarks and 10.9% on three out-of-distribution benchmarks.
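The abstract describes an entropy-aware weighting mechanism that blends the SFT and RL objectives in a single stage, using policy entropy as an indicator of training state. The paper's exact formulation is not given here, so the following is only a minimal illustrative sketch of one plausible instance: the SFT (demonstration) term is weighted by normalized policy entropy, so a high-entropy (uncertain) policy leans on demonstrations while a low-entropy (confident) policy leans on its own RL rollouts. The function and weighting scheme are hypothetical, not the authors' implementation.

```python
import math

def policy_entropy(probs):
    """Shannon entropy of a categorical policy distribution (nats)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def entropy_weighted_loss(sft_loss, rl_loss, policy_probs):
    """Hypothetical single-stage SRFT-style objective (illustrative only).

    Weights the SFT term by normalized policy entropy in [0, 1]:
    an uncertain policy (high entropy) is pulled toward demonstrations,
    a confident policy (low entropy) is optimized mainly by the RL term.
    """
    h = policy_entropy(policy_probs)
    h_max = math.log(len(policy_probs))  # entropy of the uniform distribution
    w = h / h_max
    return w * sft_loss + (1.0 - w) * rl_loss
```

With a uniform (maximum-entropy) policy the combined loss reduces to the SFT term alone; with a one-hot (zero-entropy) policy it reduces to the RL term alone. Any actual implementation would compute these quantities from token-level log-probabilities over rollouts rather than a single distribution.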

Yuqian Fu, Tinghong Chen, Jiajun Chai, Xihuai Wang, Songjun Tu, Guojun Yin, Wei Lin, Qichao Zhang, Yuanheng Zhu, Dongbin Zhao • 2025

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Mathematical Reasoning | MATH500 (test) | -- | 514 |
| Mathematical Reasoning | AIME 2024 (test) | -- | 159 |
| Mathematical Reasoning | OlympiadBench (test) | @1 Success Rate: 50 | 15 |
| Mathematical Reasoning | Out-of-Distribution Reasoning Suite (ARC-c, GPQA-Diamond) | ARC-c (pass@1): 81.6 | 14 |
| Mathematical Reasoning | In-Distribution Reasoning Suite (AIME 24, AIME 25, AMC, MATH-500, Minerva) | AIME 24 Pass@32: 30.7 | 14 |
| Tool Use | BFCL | Live Success Rate: 64.6 | 7 |
