
Mitigating Reward Hacking in RLHF via Bayesian Non-negative Reward Modeling

About

Reward models learned from human preferences are central to aligning large language models (LLMs) via reinforcement learning from human feedback, yet they are often vulnerable to reward hacking due to noisy annotations and systematic biases such as response length or style. We propose the Bayesian Non-Negative Reward Model (BNRM), a principled reward modeling framework that integrates non-negative factor analysis into the Bradley-Terry (BT) preference model. BNRM represents rewards through a sparse, non-negative latent factor generative process that operates at two complementary levels: instance-specific latent variables induce disentangled reward representations, while sparsity over global latent factors acts as an implicit debiasing mechanism that suppresses spurious correlations. Together, this disentanglement-then-debiasing structure enables robust, uncertainty-aware reward learning. To scale BNRM to modern LLMs, we develop an amortized variational inference network conditioned on deep model representations, allowing efficient end-to-end training. Extensive empirical results demonstrate that BNRM substantially mitigates reward over-optimization, improves robustness under distribution shifts, and yields more interpretable reward decompositions than strong baselines.
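The core objects in the abstract can be sketched numerically: a reward built as a non-negative combination of latent factors, a Bradley-Terry negative log-likelihood on preference pairs, and a sparsity penalty on the global factor loadings. The sketch below is a minimal toy illustration of that structure, not the paper's actual generative model or inference network; the softplus parameterization, the factor dimensions, and the penalty weight are all assumptions chosen for readability.

```python
import numpy as np

def softplus(x):
    # Smooth map to non-negative values: log(1 + exp(x)).
    return np.log1p(np.exp(x))

def nonneg_reward(z, w):
    """Reward as a non-negative combination of latent factors.
    z: instance-specific latent variables, w: global factor loadings.
    Both are passed through softplus so every factor contributes a
    non-negative amount (hypothetical parameterization, for illustration
    only; BNRM's actual generative process is Bayesian and learned)."""
    return float(softplus(z) @ softplus(w))

def bt_loss(r_chosen, r_rejected):
    """Bradley-Terry negative log-likelihood for one preference pair:
    -log sigmoid(r_chosen - r_rejected) = log(1 + exp(-(rc - rr)))."""
    return float(np.log1p(np.exp(-(r_chosen - r_rejected))))

# Toy preference pair with 4 latent factors.
rng = np.random.default_rng(0)
w = rng.normal(size=4)                      # global loadings (pre-softplus)
z_chosen = rng.normal(size=4)               # latents for the preferred response
z_rejected = rng.normal(size=4)             # latents for the rejected response

loss = bt_loss(nonneg_reward(z_chosen, w), nonneg_reward(z_rejected, w))
# An L1-style sparsity penalty on the (non-negative) global loadings stands
# in for the paper's implicit debiasing mechanism; 0.01 is an arbitrary weight.
penalty = 0.01 * softplus(w).sum()
total = loss + penalty
```

In this toy setup, driving some global loadings toward zero via the penalty removes entire factors from every reward, which is the intuition behind suppressing spurious, dataset-wide correlations such as length bias.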

Zhibin Duan, Guowei Rong, Zhuo Li, Bo Chen, Mingyuan Zhou, Dandan Guo• 2026

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Code Generation | HumanEval | - | 1036 |
| Language Understanding | MMLU | Accuracy 70.72 | 825 |
| Instruction Following | IFEval | - | 625 |
| Common Sense Reasoning | HellaSwag | Accuracy 74.66 | 213 |
| Logical Reasoning | BBH | Accuracy 67.72 | 201 |
| Reward Modeling | RewardBench | Chat Score 95.7 | 146 |
| Reward Modeling | Unified Feedback (UF) | Accuracy 78.8 | 40 |
| Question Answering | TriviaQA (5-shot) | Accuracy 71.99 | 30 |
| Mathematical Reasoning | GSM8K (4-shot) | Score 82.49 | 27 |
| Reading Comprehension | RACE (3-shot) | Accuracy 83.31 | 14 |

Showing 10 of 13 rows
