
Eliminating Inductive Bias in Reward Models with Information-Theoretic Guidance

About

Reward models (RMs) are essential in reinforcement learning from human feedback (RLHF) for aligning large language models (LLMs) with human values. However, RM training data is commonly recognized as low-quality, containing inductive biases that can easily lead to overfitting and reward hacking. For example, more detailed and comprehensive responses are usually preferred by humans, but they also tend to be longer, making response length an almost inevitable inductive bias. The few prior RM debiasing approaches either target a single specific type of bias or model the problem with only simple linear correlations, e.g., Pearson coefficients. To mitigate more complex and diverse inductive biases in reward modeling, we introduce a novel information-theoretic debiasing method called Debiasing via Information optimization for RM (DIR). Inspired by the information bottleneck (IB), we maximize the mutual information (MI) between RM scores and human preference pairs, while minimizing the MI between RM outputs and biased attributes of the preference inputs. With theoretical justification from information theory, DIR can handle more sophisticated types of biases with non-linear correlations, broadly extending the real-world application scenarios for RM debiasing methods. In experiments, we verify the effectiveness of DIR on three types of inductive bias: response length, sycophancy, and format. We find that DIR not only effectively mitigates the targeted inductive biases but also enhances RLHF performance across diverse benchmarks, yielding better generalization. The code and training recipes are available at https://github.com/Qwen-Applications/DIR.
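The objective described above combines a preference-fitting term (maximizing MI between RM scores and human preferences) with a bias-suppressing term (minimizing MI between RM outputs and a biased attribute). As a rough illustration only, the sketch below pairs the standard Bradley-Terry preference loss with a squared-correlation penalty on response length as a simple stand-in for the MI-minimization term; the actual DIR method uses variational MI bounds, and the function names, `lam` weight, and correlation surrogate here are hypothetical choices, not the paper's implementation.

```python
import numpy as np

def preference_loss(r_chosen, r_rejected):
    # Bradley-Terry negative log-likelihood over preference pairs:
    # pushes the reward of the chosen response above the rejected one.
    margin = r_chosen - r_rejected
    return float(-np.mean(np.log(1.0 / (1.0 + np.exp(-margin)))))

def bias_penalty(rewards, bias_attr):
    # Hypothetical surrogate for MI between rewards and a biased
    # attribute (e.g., response length): squared Pearson correlation.
    # DIR uses variational MI bounds; this linear proxy is illustrative.
    r = np.corrcoef(rewards, bias_attr)[0, 1]
    return float(r ** 2)

def dir_style_objective(r_chosen, r_rejected,
                        len_chosen, len_rejected, lam=0.1):
    # Total objective: fit preferences while penalizing dependence of
    # reward scores on the biased attribute across both responses.
    rewards = np.concatenate([r_chosen, r_rejected])
    lengths = np.concatenate([len_chosen, len_rejected])
    return preference_loss(r_chosen, r_rejected) + lam * bias_penalty(rewards, lengths)
```

In a real RM training loop the rewards would come from the model being trained and the penalty term would be backpropagated jointly with the preference loss; here plain NumPy arrays stand in for those scores.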

Zhuo Li, Pengyu Cheng, Zhechao Yu, Feifei Tong, Anningzhe Gao, Tsung-Hui Chang, Xiang Wan, Erchao Zhao, Xiaoxi Jiang, Guanjun Jiang • 2025

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Code Generation | HumanEval | - | 850 |
| Mathematical Reasoning | GSM8K | Accuracy (GSM8K): 84.84 | 358 |
| Instruction Following | IFEval | - | 292 |
| Multitask Language Understanding | MMLU | Accuracy: 72.64 | 206 |
| Common Sense Reasoning | HellaSwag | Accuracy: 77.33 | 164 |
| Reading Comprehension | RACE | Accuracy: 80.32 | 151 |
| Logical Reasoning | BBH | Accuracy: 67.27 | 93 |
| Instruction Following | MT-Bench (test) | - | 27 |
| Reasoning | PROCESSBENCH | Accuracy: 27.73 | 20 |
| Instruction Following | AlpacaEval against gpt4-1106-preview (test) | Win Rate (Raw): 31.3 | 10 |
Showing 10 of 13 rows
