
Blending Supervised and Reinforcement Fine-Tuning with Prefix Sampling

About

Existing post-training techniques for large language models are broadly categorized into Supervised Fine-Tuning (SFT) and Reinforcement Fine-Tuning (RFT). Each paradigm presents a distinct trade-off: SFT excels at mimicking demonstration data but, as a form of behavior cloning, can generalize poorly. Conversely, RFT can significantly enhance a model's performance but is prone to learning unexpected behaviors, and its performance is highly sensitive to the initial policy. In this paper, we propose a unified view of these methods and introduce Prefix-RFT, a hybrid approach that synergizes learning from both demonstration and exploration. Using mathematical reasoning problems as a testbed, we empirically demonstrate that Prefix-RFT is both simple and effective. It not only surpasses the performance of standalone SFT and RFT but also outperforms parallel mixed-policy RFT methods. A key advantage is its seamless integration into existing open-source frameworks, requiring only minimal modifications to the standard RFT pipeline. Our analysis highlights the complementary nature of SFT and RFT, and validates that Prefix-RFT effectively harmonizes these two learning paradigms. Furthermore, ablation studies confirm the method's robustness to variations in the quality and quantity of demonstration data. We hope this work offers a new perspective on LLM post-training, suggesting that a unified paradigm that judiciously integrates demonstration and exploration could be a promising direction for future research.
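The core idea of blending demonstration and exploration via prefix sampling can be sketched as follows. This is a minimal illustration, not the paper's actual implementation: the helper names (`sample_prefix_rollout`, `policy_generate`) and the uniform choice of prefix length are assumptions for the sake of the example. A random-length prefix of a demonstration is kept, and the policy being trained completes the rest, so the resulting rollout mixes off-policy (demonstrated) and on-policy (explored) tokens and can be scored and trained on by a standard RFT pipeline.

```python
import random

def sample_prefix_rollout(demo_tokens, policy_generate,
                          min_frac=0.0, max_frac=1.0):
    """Build a hybrid rollout from a demonstration.

    A prefix of random relative length in [min_frac, max_frac] is copied
    from the demonstration; the policy generates the continuation.
    Returns the full sequence and the prefix length (useful if the
    training loss treats demonstrated and explored tokens differently).
    """
    frac = random.uniform(min_frac, max_frac)
    cut = int(len(demo_tokens) * frac)
    prefix = demo_tokens[:cut]
    continuation = policy_generate(prefix)  # on-policy completion
    return prefix + continuation, cut
```

In a full pipeline, the returned sequence would be rewarded like any other RFT rollout, which is why the authors describe the method as requiring only minimal changes to a standard RFT setup.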

Zeyu Huang, Tianhao Cheng, Zihan Qiu, Zili Wang, Yinghui Xu, Edoardo M. Ponti, Ivan Titov • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Mathematical Reasoning | AIME 2024 | Accuracy | 31.8 | 151 |
| Mathematical Reasoning | Minerva | Accuracy | 40.3 | 62 |
| Multi-task Language Understanding | MMLU-Pro | Accuracy | 52.1 | 55 |
| Mathematical Reasoning | AMC 2023 | Accuracy | 68.2 | 42 |
| Mathematical Reasoning | AIME 2025 | Accuracy | 26.4 | 40 |
| Mathematical Reasoning | MATH | Accuracy | 88.4 | 26 |
| Question Answering | GPQA Diamond | Accuracy | 39.1 | 14 |
