Regularizing Hidden States Enables Learning Generalizable Reward Model for LLMs

About

Reward models trained on human preference data have proven effective for aligning Large Language Models (LLMs) with human intent within the framework of reinforcement learning from human feedback (RLHF). However, current reward models generalize poorly to unseen prompts and responses, which can lead to reward over-optimization: actual performance declines as the reward is optimized too aggressively. While previous research has advocated constraining policy optimization, our study introduces a novel approach that improves the reward model's robustness to distribution shifts by regularizing its hidden states. Specifically, we retain the base model's language model head and incorporate a suite of text-generation losses to preserve the hidden states' text-generation capabilities, while concurrently learning a reward head on top of the same hidden states. Our experimental results demonstrate that this regularization markedly improves the accuracy of learned reward models across a variety of out-of-distribution (OOD) tasks and effectively alleviates the over-optimization issue in RLHF, offering a more reliable and robust preference-learning paradigm.
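The abstract describes two heads reading the same backbone hidden states: a scalar reward head trained with a pairwise preference loss, and the retained language-model head trained with a text-generation loss that regularizes the representation. Below is a minimal NumPy sketch of that combined objective. The specific choices here (a Bradley-Terry ranking loss, cross-entropy on the chosen response's tokens, the weight `lam`, and the toy dimensions) are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Toy dimensions (illustrative only).
hidden_dim, vocab_size = 8, 16
rng = np.random.default_rng(0)

# Backbone hidden states for one (chosen, rejected) response pair.
h_chosen = rng.normal(size=(5, hidden_dim))    # 5 tokens
h_rejected = rng.normal(size=(4, hidden_dim))  # 4 tokens

# Two heads on the SAME hidden states: a scalar reward head and the
# base model's retained language-model head.
w_reward = rng.normal(size=hidden_dim)
W_lm = rng.normal(size=(hidden_dim, vocab_size))

def reward(h):
    # Scalar reward read from the final token's hidden state.
    return h[-1] @ w_reward

def lm_loss(h, next_tokens):
    # Token-level cross-entropy: the text-generation regularizer that
    # keeps the hidden states usable for generation.
    probs = softmax(h @ W_lm)
    return -np.mean(np.log(probs[np.arange(len(next_tokens)), next_tokens]))

# Pairwise (Bradley-Terry) preference loss on the reward head.
rank_loss = -np.log(sigmoid(reward(h_chosen) - reward(h_rejected)))

# Regularizer applied to the chosen response's tokens (random toy targets).
targets = rng.integers(0, vocab_size, size=len(h_chosen))
reg_loss = lm_loss(h_chosen, targets)

lam = 0.1  # regularization weight (hypothetical value)
total_loss = rank_loss + lam * reg_loss
print(float(total_loss))
```

In training, both loss terms backpropagate into the shared backbone, so the hidden states must remain good for text generation while also supporting reward prediction; the abstract credits this constraint with the improved OOD accuracy.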

Rui Yang, Ruomeng Ding, Yong Lin, Huan Zhang, Tong Zhang • 2024

Related benchmarks

Task                         Dataset               Metric       Result  Rank
Reward Modeling              RewardBench           Accuracy     91.5    166
Reward Modeling              RewardBench           Chat Score   98.6    146
Reward Modeling              RM-Bench              Accuracy     70.3    125
Reward Modeling              RMB                   Accuracy     70.2    120
Reward Modeling              JudgeBench            Accuracy     63.5    105
Reward Modeling              RewardBench v2        Accuracy     67.7    72
Reward Modeling              PPE-Preference        Accuracy     63.2    60
Reward Modeling Evaluation   RM-Bench              Chat Score   63.6    55
Reward Modeling              Unified Feedback (UF) Accuracy     78.9    40
Reward Modeling              PPE Correlation       Correlation  62.8    40

(Showing 10 of 26 rows.)
