
Mitigating Reward Hacking in RLHF via Bayesian Non-negative Reward Modeling

About

Reward models learned from human preferences are central to aligning large language models (LLMs) via reinforcement learning from human feedback, yet they are often vulnerable to reward hacking due to noisy annotations and systematic biases such as response length or style. We propose the Bayesian Non-negative Reward Model (BNRM), a principled reward modeling framework that integrates non-negative factor analysis into the Bradley-Terry (BT) preference model. BNRM represents rewards through a sparse, non-negative latent factor generative process that operates at two complementary levels: instance-specific latent variables induce disentangled reward representations, while sparsity over global latent factors acts as an implicit debiasing mechanism that suppresses spurious correlations. Together, this disentanglement-then-debiasing structure enables robust, uncertainty-aware reward learning. To scale BNRM to modern LLMs, we develop an amortized variational inference network conditioned on deep model representations, allowing efficient end-to-end training. Extensive empirical results demonstrate that BNRM substantially mitigates reward over-optimization, improves robustness under distribution shifts, and yields more interpretable reward decompositions than strong baselines.

Zhibin Duan, Guowei Rong, Zhuo Li, Bo Chen, Mingyuan Zhou, Dandan Guo • 2026
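The abstract describes rewards as a sparse, non-negative combination of latent factors, fit with a Bradley-Terry preference loss via an amortized variational inference network. Below is a minimal PyTorch sketch of that idea, not the authors' implementation: the names (`BNRMHead`, `n_factors`, `bnrm_loss`), the Gaussian-with-softplus latent, and the L1 stand-in for the sparsity prior are all illustrative assumptions, and the paper's exact generative process may differ.

```python
# Minimal sketch of the BNRM idea from the abstract; NOT the authors' code.
# Assumptions: Gaussian variational posterior mapped through softplus for
# non-negativity, and an L1 penalty as a stand-in for the sparsity prior.
import torch
import torch.nn as nn
import torch.nn.functional as F

class BNRMHead(nn.Module):
    """Reward head: reward = (non-negative instance factors) . (non-negative global weights)."""

    def __init__(self, hidden_dim: int, n_factors: int = 16):
        super().__init__()
        # Amortized variational encoder: LLM representation -> (mu, log sigma).
        self.encoder = nn.Linear(hidden_dim, 2 * n_factors)
        # Global factor weights shared across all instances.
        self.factor_weights = nn.Parameter(torch.zeros(n_factors))

    def forward(self, h: torch.Tensor):
        mu, log_sigma = self.encoder(h).chunk(2, dim=-1)
        # Reparameterized sample of the instance-specific latent variables.
        z = mu + log_sigma.exp() * torch.randn_like(mu)
        z = F.softplus(z)                      # non-negative instance factors
        w = F.softplus(self.factor_weights)    # non-negative global weights
        reward = (z * w).sum(dim=-1)           # scalar reward per response
        # Sparsity term over global weights (implicit debiasing mechanism).
        sparsity = w.abs().sum()
        return reward, sparsity

def bnrm_loss(head: BNRMHead, h_chosen, h_rejected, lam: float = 1e-3):
    """Bradley-Terry preference loss plus the sparsity regularizer."""
    r_c, sparsity = head(h_chosen)
    r_r, _ = head(h_rejected)
    bt = -F.logsigmoid(r_c - r_r).mean()       # BT: P(chosen beats rejected)
    return bt + lam * sparsity
```

In use, `h_chosen` and `h_rejected` would be pooled hidden states from the underlying LLM for the preferred and dispreferred responses in each preference pair; the BT term pushes chosen rewards above rejected ones, while the sparsity term prunes unused global factors.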

Related benchmarks

Task | Dataset | Metric | Result | Rank
Code Generation | HumanEval | – | – | 850
Language Understanding | MMLU | Accuracy | 70.72 | 756
Instruction Following | IFEval | Accuracy (0-100) | 78.2 | 292
Common Sense Reasoning | HellaSwag | Accuracy | 74.66 | 164
Reward Modeling | RewardBench | Avg Score | 72.5 | 118
Logical Reasoning | BBH | Accuracy | 67.72 | 93
Reward Modeling | Unified Feedback (UF) | Accuracy | 78.8 | 40
Question Answering | TriviaQA (5-shot) | Accuracy | 71.99 | 30
Mathematical Reasoning | GSM8K (4-shot) | Score | 82.49 | 19
Reading Comprehension | RACE (3-shot) | Accuracy | 83.31 | 14

(Showing 10 of 13 rows.)
