
Unifying Stable Optimization and Reference Regularization in RLHF

About

Reinforcement Learning from Human Feedback (RLHF) has advanced alignment capabilities significantly but remains hindered by two core challenges: reward hacking and unstable optimization. Current solutions address these issues independently through separate regularization strategies: a KL-divergence penalty against a supervised fine-tuned model ($\pi_0$) to mitigate reward hacking, and policy ratio clipping towards the current policy ($\pi_t$) to promote stable updates. However, the implicit trade-off arising from simultaneously regularizing towards both $\pi_0$ and $\pi_t$ remains under-explored. In this paper, we introduce a unified regularization approach that explicitly balances the objectives of preventing reward hacking and maintaining stable policy updates. Our simple yet principled alignment objective yields a weighted supervised fine-tuning loss with a superior trade-off, which demonstrably improves alignment results while reducing implementation complexity. Extensive experiments across diverse benchmarks validate that our method consistently outperforms RLHF and online preference learning methods, achieving enhanced alignment performance and stability.
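To make the trade-off concrete, here is a minimal sketch of the two regularizers the abstract describes: a PPO-style clipped surrogate toward the current policy $\pi_t$ (stability) combined with a KL penalty toward the reference $\pi_0$ (reward-hacking control). This is an illustrative composition of the standard ingredients, not the paper's unified objective; the hyperparameters `beta` and `eps` and the per-token KL estimate are assumptions.

```python
import math

def rlhf_surrogate(reward, logp_new, logp_old, logp_ref,
                   beta=0.1, eps=0.2):
    """Per-token RLHF surrogate combining the two standard
    regularizers: ratio clipping toward pi_t and a KL penalty
    toward the SFT reference pi_0. Values of beta and eps are
    illustrative, not taken from the paper."""
    # Policy ratio pi_new / pi_t, used in the clipped surrogate
    # (stable optimization toward the current policy).
    ratio = math.exp(logp_new - logp_old)
    clipped = max(min(ratio, 1.0 + eps), 1.0 - eps)
    surrogate = min(ratio * reward, clipped * reward)
    # Simple per-token KL estimate toward pi_0
    # (discourages drifting from the reference, i.e. reward hacking).
    kl_to_ref = logp_new - logp_ref
    return surrogate - beta * kl_to_ref

# When the policy has not moved (all log-probs equal), the ratio is 1,
# the KL term vanishes, and the surrogate reduces to the raw reward.
print(rlhf_surrogate(1.0, 0.0, 0.0, 0.0))
```

The tension the paper targets is visible here: the clipping term anchors updates to $\pi_t$, which drifts over training, while the KL term anchors the policy to the fixed $\pi_0$; the two anchors can pull in different directions.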

Li He, Qiang Qu, He Zhao, Stephen Wan, Dadong Wang, Lina Yao, Tongliang Liu • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Multi-turn Conversation Evaluation | MT-Bench 1.0 (test) | GPT-4 Score | 8.538 | 5 |
| Instruction Following Evaluation | AlpacaEval 2.0 (test) | LC% over $\pi_0$ | 54.17 | 4 |
