
Stabilizing Policy Optimization via Logits Convexity

About

While reinforcement learning (RL) has been central to the recent success of large language models (LLMs), RL optimization is notoriously unstable, especially when compared to supervised fine-tuning (SFT). In this work, we investigate the stability gap between SFT and RL from a gradient-based perspective, and show that the convexity of the SFT loss with respect to model logits plays a key role in enabling stable training. Our theoretical analysis demonstrates that this property induces favorable gradient directionality during optimization. In contrast, Proximal Policy Optimization (PPO), a widely adopted policy gradient algorithm utilizing a clipped surrogate objective, lacks this stabilizing property. Motivated by this observation, we propose Logits Convex Optimization (LCO), a simple yet effective policy optimization framework that aligns the learned policy with an optimal target derived from the original RL objective, thereby emulating the stabilizing effects of logits-level convexity. Extensive experiments across multiple model families show that our LCO framework consistently improves training stability and outperforms conventional RL methods on a broad range of benchmarks.
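The convexity property at the heart of the abstract is easy to verify numerically: the cross-entropy (SFT) loss has Hessian diag(p) − p pᵀ with respect to the logits (where p is the softmax output), which is positive semi-definite for any logits. The sketch below checks this; it is an illustration of the stated property, not the paper's LCO implementation.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a logits vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

def ce_hessian_wrt_logits(z):
    """Hessian of cross-entropy loss w.r.t. logits: diag(p) - p p^T.

    Note the Hessian does not depend on the target label, only on p.
    """
    p = softmax(z)
    return np.diag(p) - np.outer(p, p)

rng = np.random.default_rng(0)
z = rng.normal(size=8)            # arbitrary logits
H = ce_hessian_wrt_logits(z)
eigs = np.linalg.eigvalsh(H)      # symmetric matrix -> real eigenvalues
# All eigenvalues are >= 0 (up to float error), so the loss is
# convex in the logits -- the stabilizing property the paper builds on.
print(eigs.min() >= -1e-9)
```

By contrast, PPO's clipped surrogate is a function of probability ratios rather than logits directly, and no analogous positive-semi-definiteness holds for it in general, which is the gap LCO is designed to close.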

Hongzhan Chen, Tao Yang, Yuhua Zhu, Shiping Gao, Xiaojun Quan, Ting Yao • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Instruction Following | AlpacaEval 2.0 | Win Rate | 29.05 | 507 |
| Multi-task Language Understanding | MMLU | Accuracy | 72.11 | 321 |
| Math Reasoning | MATH500 | Pass@1 Rate | 73.2 | 58 |
| Multi-task Language Understanding | MMLU-Redux | Accuracy | 76.71 | 44 |
| Mathematical Reasoning | Minerva Math | Avg@1 Accuracy | 24.26 | 40 |
| Mathematical Reasoning | AMC 23 | Pass@1 | 55.5 | 37 |
| Machine Reading Comprehension | QA-FEEDBACK (test) | Relevance | 44.9 | 22 |
