
RLHF in an SFT Way: From Optimal Solution to Reward-Weighted Alignment

About

Reinforcement Learning from Human Feedback (RLHF) is crucial for aligning Large Language Models (LLMs) with human values. However, RLHF has been continuously challenged by its high implementation complexity and computational cost, specifically for online sampling-based methods like Proximal Policy Optimization (PPO) and Group Relative Policy Optimization (GRPO). Even with recent simplifications, such as Direct Preference Optimization (DPO), which designs an offline implicit reward learning objective relying on pre-collected preference datasets, the problems of over-fitting and training instability continue to prevent alignment from reaching the expected optimal performance. To address these challenges, we propose a novel simplification of RLHF from the perspective of variational inference, called Variational Alignment with Re-weighting (VAR). Specifically, by directly minimizing the distribution gap between the learning LLM policy and the optimal solution of RLHF, we transform the alignment objective into an offline, reward-driven, re-weighted supervised fine-tuning (SFT) form, which requires only a minor adjustment to the SFT loss to obtain noticeable improvements in training stability and effectiveness. On comprehensive evaluation benchmarks, our objective empowers LLMs to outperform offline alignment methods, demonstrating superior performance in both helpfulness and harmlessness metrics (avg. $\uparrow 7.16\%$ over DPO). Meanwhile, compared to online sampling methods, our method achieves comparable or better performance while significantly reducing computational overhead and accelerating convergence (over $5\times$ faster than GRPO), suggesting our approach is an efficient and effective solution for bridging the gap between efficiency and performance in LLM alignment.
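The reward-driven re-weighted SFT form described in the abstract can be sketched as follows. This is a minimal illustration only, assuming weights given by a softmax over rewards scaled by a temperature (mirroring the exponential-reward form of the optimal RLHF policy); the function name `reweighted_sft_loss` and the parameter `beta` are hypothetical, and the paper's exact objective may differ:

```python
import math

def reweighted_sft_loss(response_logps, rewards, beta=1.0):
    """Hypothetical sketch of a reward-weighted SFT objective.

    response_logps: sum of token log-probabilities of each response
                    under the current policy.
    rewards:        scalar reward for each response.
    beta:           temperature; smaller beta concentrates weight on
                    higher-reward responses.
    """
    # Softmax over rewards / beta, computed with the max-subtraction
    # trick for numerical stability.
    scaled = [r / beta for r in rewards]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    z = sum(exps)
    weights = [e / z for e in exps]
    # Reward-weighted negative log-likelihood: a plain SFT loss whose
    # per-response terms are re-weighted by the reward-derived weights.
    return -sum(w * lp for w, lp in zip(weights, response_logps))

# Higher-reward responses dominate the loss:
loss = reweighted_sft_loss([-2.0, -1.0], [0.0, 1.0], beta=1.0)
```

With uniform rewards the weights are uniform and the objective reduces to an ordinary averaged SFT loss, which is why only a minor adjustment to the SFT loss is needed.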

Yuhao Du, Zhuo Li, Pengyu Cheng, Zhihong Chen, Yuejiao Xie, Xiang Wan, Anningzhe Gao• 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Alignment Reward Evaluation | OffsetBias (test) | Reward | 66.44 | 50 |
| Reward Scoring | HHA benchmark | Harmlessness Score (Base) | 66.37 | 30 |
| Mathematical Reasoning | GSM8K 4-shot | Score | 74.3 | 27 |
| Alignment Reward Evaluation | HHA (test) | Harmless Score | 64 | 20 |
| Code Generation | HumanEval 0-shot | -- | -- | 14 |
| General Language Model Capability | MMLU, GSM8K, HumanEval, BBH Combined | Average Score | 68.42 | 8 |
| Logical Reasoning | BBH 3-shot chain-of-thought | EM | 61.35 | 8 |
| Multi-task Language Understanding | MMLU 0-shot | Exact Match (EM) | 69.11 | 8 |
| RLHF Alignment Evaluation | HHA | Harmlessness Win Rate (Base, A) | 63.3 | 6 |
| Conversational Evaluation | Arena-Hard 0.1 | WR (%) | 10.8 | 3 |

(Showing 10 of 11 rows.)
