
VRM: Teaching Reward Models to Understand Authentic Human Preferences

About

Large Language Models (LLMs) have achieved remarkable success across diverse natural language tasks, yet the reward models used to align them are prone to reward hacking: existing approaches map prompt-response pairs directly to scalar scores, which may inadvertently capture spurious correlations rather than authentic human preferences. Human evaluation, in contrast, follows a more sophisticated process: it first weighs the relative importance of multiple high-dimensional objectives according to the prompt context, and then assesses response quality through low-dimensional semantic features such as logical coherence and contextual appropriateness. Motivated by this observation, we propose Variational Reward Modeling (VRM), a novel framework that explicitly models the human preference-judgment process by treating both the high-dimensional objective weights and the low-dimensional semantic features as latent variables inferred via variational inference. We further provide a theoretical analysis showing that VRM achieves a tighter generalization error bound than traditional reward models. Extensive experiments on benchmark datasets demonstrate that VRM significantly outperforms existing methods in capturing authentic human preferences.
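To make the abstract's setup concrete, here is a minimal sketch of what a VRM-style variational reward model could look like, reconstructed from the abstract alone: prompt-conditioned objective weights and low-dimensional semantic features are sampled as Gaussian latents via the reparameterization trick, and a Bradley-Terry pairwise loss is regularized with KL terms as in an ELBO. All module names, dimensions, and the choice of pairwise loss are assumptions, not the authors' released implementation.

```python
# Hypothetical VRM-style reward model; a sketch based only on the abstract,
# not the authors' code. Embeddings stand in for frozen LLM encodings.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VariationalRewardModel(nn.Module):
    def __init__(self, emb_dim=768, n_objectives=8, sem_dim=16):
        super().__init__()
        # q(w | prompt): high-dimensional objective weights, conditioned on the prompt
        self.weight_head = nn.Linear(emb_dim, 2 * n_objectives)        # mean, log-variance
        # q(s | prompt, response): low-dimensional semantic features (e.g. coherence)
        self.sem_head = nn.Linear(2 * emb_dim, 2 * sem_dim)
        # maps semantic features to per-objective quality scores
        self.score_head = nn.Linear(sem_dim, n_objectives)

    @staticmethod
    def _sample(stats):
        # Reparameterized Gaussian sample plus KL to a standard normal prior.
        mu, logvar = stats.chunk(2, dim=-1)
        z = mu + (0.5 * logvar).exp() * torch.randn_like(mu)
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(-1)
        return z, kl

    def forward(self, prompt_emb, response_emb):
        w, kl_w = self._sample(self.weight_head(prompt_emb))
        s, kl_s = self._sample(self.sem_head(torch.cat([prompt_emb, response_emb], -1)))
        # Reward: softmaxed objective weights dotted with per-objective scores.
        reward = (F.softmax(w, -1) * self.score_head(s)).sum(-1)
        return reward, kl_w + kl_s

def preference_loss(model, prompt, chosen, rejected, beta=1e-3):
    r_c, kl_c = model(prompt, chosen)
    r_r, kl_r = model(prompt, rejected)
    # Bradley-Terry likelihood on the pair, with KL terms as ELBO regularizers.
    return -F.logsigmoid(r_c - r_r).mean() + beta * (kl_c + kl_r).mean()

# Toy usage with random embeddings in place of real prompt/response encodings.
model = VariationalRewardModel()
prompt, chosen, rejected = (torch.randn(4, 768) for _ in range(3))
loss = preference_loss(model, prompt, chosen, rejected)
loss.backward()
```

At inference time such a model could either sample the latents or use the posterior means for a deterministic reward; the abstract does not specify which, so that design choice is left open here.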

Biao Liu, Ning Xu, Junming Yang, Hao Xu, Xin Geng • 2026

Related benchmarks

Task                              Dataset                Metric          Result  Rank
Reward Modeling                   RewardBench            Chat Score      97.11   146
General Chat Evaluation           Arena Hard             Win Rate        84      16
Instruction Following Evaluation  AlpacaEval 2           Win Rate        48.14   16
Multi-turn Chat Evaluation        MT-Bench               MT-Bench Score  8.58    16
Reward Modeling                   UltraFeedback Cleaned  Total Score     92.36   8
