
CHARM: Calibrating Reward Models With Chatbot Arena Scores

About

Reward models (RMs) play a crucial role in Reinforcement Learning from Human Feedback (RLHF) by serving as proxies for human preferences when aligning large language models. However, they suffer from various biases that can lead to reward hacking. In this paper, we identify a model preference bias in RMs: they systematically assign disproportionately high scores to responses from certain policy models, leading to unfair judgments. To mitigate this bias, we propose a calibration method named CHatbot Arena calibrated Reward Modeling (CHARM), which leverages Elo scores from the Chatbot Arena to construct debiased preference datasets and adjust reward model scoring. We conduct extensive experiments on reward model benchmarks and human preference alignment. The results demonstrate that our calibrated RMs achieve improved evaluation accuracy on RM-Bench and the Chat-Hard domain of RewardBench, exhibit a stronger correlation with human preferences by producing scores more closely aligned with Elo rankings, and improve downstream post-training performance. These results show that CHARM is a simple, effective, and broadly applicable approach to building more reliable and fair reward models.
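The abstract does not spell out the calibration procedure, so the sketch below is only a rough illustration of the key ingredient, not the paper's actual algorithm: Arena Elo ratings can be converted into pairwise win probabilities with the standard Elo formula, and those probabilities can flag preference pairs whose labels are already explained by model identity alone. Everything in the sketch (model names, ratings, the debias_pairs helper, and the max_gap cutoff) is a hypothetical placeholder.

```python
# Hypothetical Chatbot Arena Elo ratings; real values would come from the
# public Arena leaderboard. Model names and numbers are placeholders.
ARENA_ELO = {"model_a": 1280.0, "model_b": 1210.0}


def elo_win_prob(elo_a: float, elo_b: float) -> float:
    """Standard Elo expectation: probability that a response from the model
    rated elo_a is preferred over one from the model rated elo_b."""
    return 1.0 / (1.0 + 10.0 ** ((elo_b - elo_a) / 400.0))


def debias_pairs(pairs, max_gap: float = 0.75):
    """Filter a preference dataset so that 'chosen' labels are not already
    predictable from model identity. Each pair is a tuple
    (chosen_model, rejected_model, chosen_text, rejected_text); the
    max_gap cutoff is an illustrative knob, not a value from the paper."""
    kept = []
    for chosen_model, rejected_model, chosen_text, rejected_text in pairs:
        p = elo_win_prob(ARENA_ELO[chosen_model], ARENA_ELO[rejected_model])
        if p <= max_gap:  # the label carries signal beyond the Elo gap
            kept.append((chosen_text, rejected_text))
    return kept


if __name__ == "__main__":
    pairs = [
        ("model_a", "model_b", "response A1", "response B1"),
        ("model_b", "model_a", "response B2", "response A2"),
    ]
    print(f"P(model_a beats model_b) = "
          f"{elo_win_prob(ARENA_ELO['model_a'], ARENA_ELO['model_b']):.3f}")
    print("kept pairs:", debias_pairs(pairs))
```

Under this reading, pairs in which the Elo favorite wins by an overwhelming margin are down-weighted or dropped, so a reward model trained on the remaining pairs is less able to score responses by the identity of the model that produced them.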

Xiao Zhu, Chenmien Tan, Pinzhen Chen, Rico Sennrich, Huiming Wang, Yanlin Zhang, Hanxu Hu • 2025

Related benchmarks

Task                         Dataset                 Result                  Rank
Instruction Following        IFEval                  -                       625
Reward Modeling Evaluation   RM-Bench                Chat Score: 73.9        55
Reward Modeling              RewardBench Chat-Hard   Chat-Hard Score: 89.4   15
