
Information-Theoretic Reward Modeling for Stable RLHF: Detecting and Mitigating Reward Hacking

About

Despite the success of Reinforcement Learning from Human Feedback (RLHF) in aligning language models with human values, reward hacking (also known as reward over-optimization) remains a major challenge. We identify two key obstacles to its mitigation: (1) reward misgeneralization in reward modeling, where reward models overfit to spurious, preference-irrelevant features; and (2) the lack of suitable regularization during RL optimization, as existing token-level constraints often over-restrict the policy space. To address these issues, we propose InfoRM, an information-theoretic reward modeling framework based on the Information Bottleneck (IB) principle, which filters out preference-irrelevant information to alleviate reward misgeneralization. We further observe that reward-hacked responses manifest as pronounced outliers in InfoRM's IB latent space, as measured by Mahalanobis distance from the SFT-induced distribution. Motivated by this, we introduce IBL, a distribution-level regularization that penalizes such deviations, effectively expanding the optimization landscape while maintaining alignment. We prove that IBL is theoretically equivalent to the pessimistic RL objective within the IB latent space. Finally, we present Mahalanobis Outlier Probability (MOP), a statistical metric for quantifying reward hacking severity, enabling principled hyperparameter tuning and online mitigation such as early stopping. Extensive experiments across diverse LLMs and datasets confirm the generality of our findings, the effectiveness of InfoRM and IBL, and the reliability of MOP as a diagnostic tool, collectively advancing the state of RLHF.
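The outlier-detection idea in the abstract can be illustrated with a minimal sketch. This is not the paper's exact MOP definition: it assumes latent vectors from InfoRM's IB bottleneck are available as plain arrays, fits a Gaussian to SFT-response latents, and scores a candidate response by the chi-squared tail probability of its squared Mahalanobis distance. All function and variable names here are illustrative.

```python
import numpy as np
from scipy.stats import chi2


def fit_sft_gaussian(sft_latents):
    """Fit mean and (regularized) inverse covariance of the SFT-induced
    latent distribution from an (N, d) array of latent vectors."""
    mu = sft_latents.mean(axis=0)
    cov = np.cov(sft_latents, rowvar=False)
    # Small ridge term keeps the inverse numerically stable.
    cov_inv = np.linalg.inv(cov + 1e-6 * np.eye(cov.shape[0]))
    return mu, cov_inv


def mahalanobis_sq(z, mu, cov_inv):
    """Squared Mahalanobis distance of latent z from the SFT distribution."""
    d = z - mu
    return float(d @ cov_inv @ d)


def outlier_tail_probability(z, mu, cov_inv, dim):
    """MOP-style score (illustrative, not the paper's formula): under a
    Gaussian fit, the squared distance is approximately chi2(dim), so a
    small tail probability flags z as an outlier, i.e. a candidate
    reward-hacked response."""
    return float(chi2.sf(mahalanobis_sq(z, mu, cov_inv), df=dim))


# Toy demonstration with synthetic stand-ins for IB latents.
rng = np.random.default_rng(0)
sft_latents = rng.normal(size=(1000, 8))   # stand-in for SFT-response latents
mu, cov_inv = fit_sft_gaussian(sft_latents)

inlier = rng.normal(size=8)                # looks like an SFT response
outlier = inlier + 10.0                    # far from the SFT distribution

p_in = outlier_tail_probability(inlier, mu, cov_inv, dim=8)
p_out = outlier_tail_probability(outlier, mu, cov_inv, dim=8)
print(p_out < p_in)  # the outlier gets a much smaller tail probability
```

In this sketch, a small tail probability would trigger the kind of online mitigation the abstract mentions, such as early stopping of RL training.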

Yuchun Miao, Liang Ding, Sen Zhang, Rong Bao, Lefei Zhang, Dacheng Tao • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Mathematical Reasoning | SVAMP out-of-domain (test) | Accuracy | 53.9 | 50 |
| Mathematical Reasoning | ASDiv Out-of-Distribution | Top-1 Accuracy (maj@1) | 46.3 | 35 |
| Mathematical Reasoning | GSM8K In-Distribution (test) | Accuracy | 71 | 5 |
| Mathematical Reasoning | MATH In-Distribution (test) | Final Answer Accuracy | 24.9 | 5 |
| Mathematical Reasoning | Algebra222 Out-of-Distribution (test) | Final Answer Accuracy | 60.4 | 5 |
| Mathematical Reasoning | GSM-Hard Out-of-Distribution (test) | Final Answer Accuracy | 37.5 | 5 |
| Mathematical Reasoning | MAWPS Out-of-Distribution (test) | Accuracy | 51.4 | 5 |
| Open-ended Dialogue | Open-Ended Dialogue (in-distribution) | Helpful Score | 67.9 | 4 |
| Mathematical Reasoning | Mathematical Reasoning (OOD) | Algebra222 Accuracy | 82.7 | 4 |
| Open-ended Dialogue | Open-Ended Dialogue (out-of-distribution) | MT-Bench | 66.7 | 4 |

Showing 10 of 11 rows.
