Proximal Supervised Fine-Tuning

About

Supervised fine-tuning (SFT) of foundation models often leads to poor generalization, where prior capabilities deteriorate after tuning on new tasks or domains. Inspired by trust-region policy optimization (TRPO) and proximal policy optimization (PPO) in reinforcement learning (RL), we propose Proximal SFT (PSFT). This fine-tuning objective incorporates the benefits of a trust region, effectively constraining policy drift during SFT while maintaining competitive tuning performance. By viewing SFT as a special case of policy gradient methods with constant positive advantages, we derive PSFT, which stabilizes optimization and improves generalization while leaving room for further optimization in subsequent post-training stages. Experiments across mathematical and human-value domains show that PSFT matches SFT in-domain, outperforms it in out-of-domain generalization, remains stable under prolonged training without causing entropy collapse, and provides a stronger foundation for subsequent optimization.

Wenhong Zhu, Ruobing Xie, Rui Wang, Xingwu Sun, Di Wang, Pengfei Liu · 2025
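The abstract frames SFT as policy gradient with a constant positive advantage, which suggests a PPO-style clipped surrogate over ground-truth tokens. The sketch below illustrates that idea under stated assumptions: the clip range `eps`, the frozen "old" policy serving as the trust-region anchor, and the function name `psft_loss` are illustrative choices, not details confirmed by the paper.

```python
import torch
import torch.nn.functional as F

def psft_loss(logits: torch.Tensor,
              old_logits: torch.Tensor,
              targets: torch.Tensor,
              eps: float = 0.2) -> torch.Tensor:
    """Clipped surrogate loss on ground-truth tokens with advantage A = 1.

    logits / old_logits: (batch, seq_len, vocab) from the current policy and
    a frozen pre-update policy; targets: (batch, seq_len) token ids.
    """
    # Log-probability of each ground-truth token under both policies.
    logp = F.log_softmax(logits, dim=-1).gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    with torch.no_grad():
        old_logp = F.log_softmax(old_logits, dim=-1).gather(-1, targets.unsqueeze(-1)).squeeze(-1)
    ratio = torch.exp(logp - old_logp)  # importance ratio pi_theta / pi_old
    # With A = 1 the PPO surrogate reduces to min(ratio, clip(ratio, 1-eps, 1+eps)):
    # the gradient vanishes once the policy drifts past the trust region,
    # which is how a clipped objective constrains policy drift during SFT.
    surrogate = torch.minimum(ratio, ratio.clamp(1.0 - eps, 1.0 + eps))
    return -surrogate.mean()
```

Note the limiting case: if `old_logits` equals `logits`, the ratio is 1 everywhere and the gradient matches plain SFT-style likelihood ascent, consistent with SFT being the unconstrained special case.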

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Instruction Following | IFEval | – | – | 625 |
| Mathematical Multimodal Reasoning | MathVerse | Accuracy | 44.14 | 221 |
| Mathematical Multimodal Reasoning | MathVista | Accuracy | 72.2 | 218 |
| Question Answering | TruthfulQA | Accuracy | 80.19 | 152 |
| Massive Multi-discipline Multimodal Understanding | MMMU | Accuracy | 43.33 | 152 |
| Mathematical Reasoning | AMC | Accuracy (%) | 44.84 | 134 |
| Mathematical Reasoning | Minerva | Pass@1 Accuracy | 32.26 | 90 |
| LLM Alignment Evaluation | AlpacaEval 2 | LC Win Rate | 23.29 | 86 |
| Mathematical Reasoning | OlympiadBench | Accuracy | 36.02 | 81 |
| Mathematical Reasoning | MATH 500 | – | – | 76 |

Showing 10 of 38 benchmark rows.
