
WildReward: Learning Reward Models from In-the-Wild Human Interactions

About

Reward models (RMs) are crucial for the training of large language models (LLMs), yet they typically rely on large-scale human-annotated preference pairs. With the widespread deployment of LLMs, in-the-wild interactions have emerged as a rich source of implicit reward signals. This raises the question: Can we develop reward models directly from in-the-wild interactions? In this work, we explore this possibility by adopting WildChat as an interaction source and proposing a pipeline to extract reliable human feedback, yielding 186k high-quality instances. We use these instances to train WildReward via ordinal regression directly on user feedback, without preference pairs. Extensive experiments demonstrate that WildReward achieves comparable or even superior performance compared to conventional reward models, with improved calibration and cross-sample consistency. We also observe that WildReward benefits directly from user diversity, where more users yield stronger reward models. Finally, we apply WildReward to online DPO training and observe significant improvements across various tasks. Code and data are released at https://github.com/THU-KEG/WildReward.
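The abstract's key training detail is that WildReward is fit with ordinal regression directly on graded user feedback rather than on preference pairs. As a rough illustration of what such an objective can look like, below is a minimal PyTorch sketch of a cumulative-logit (ordinal-regression) reward head; the 5-level feedback scale, hidden size, and all module names are assumptions made for this example, not the released implementation (see the repository linked above for the actual code).

```python
# Illustrative sketch only (not the authors' code): training a scalar reward head
# with an ordinal-regression objective on discrete user-feedback levels, instead
# of a pairwise preference loss. The feedback scale and dimensions are assumed.
import torch
import torch.nn as nn

NUM_LEVELS = 5  # assumed ordinal feedback scale, e.g. 1 (worst) .. 5 (best)

class OrdinalRewardHead(nn.Module):
    """Maps a pooled LLM hidden state to a scalar reward, scored against
    learned ordered cutpoints (cumulative-logit ordinal regression)."""
    def __init__(self, hidden_size: int, num_levels: int = NUM_LEVELS):
        super().__init__()
        self.score = nn.Linear(hidden_size, 1)
        # num_levels - 1 cutpoints; ordering enforced via cumulative sum of positive gaps
        self.cutpoint_deltas = nn.Parameter(torch.zeros(num_levels - 1))

    def forward(self, pooled_hidden: torch.Tensor):
        reward = self.score(pooled_hidden).squeeze(-1)                    # (batch,)
        cutpoints = torch.cumsum(torch.nn.functional.softplus(self.cutpoint_deltas), dim=0)
        # logit of P(feedback > k) = reward - cutpoint_k for each threshold k
        logits = reward.unsqueeze(-1) - cutpoints.unsqueeze(0)            # (batch, num_levels - 1)
        return reward, logits

def ordinal_loss(logits: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
    """Binary cross-entropy over the K-1 cumulative targets: target_k = 1 if label > k."""
    num_thresholds = logits.size(-1)
    thresholds = torch.arange(num_thresholds, device=labels.device)
    targets = (labels.unsqueeze(-1) > thresholds).float()                 # (batch, num_levels - 1)
    return nn.functional.binary_cross_entropy_with_logits(logits, targets)

# Usage with dummy tensors standing in for pooled LLM representations of (prompt, response):
if __name__ == "__main__":
    head = OrdinalRewardHead(hidden_size=4096)
    pooled = torch.randn(8, 4096)                   # pooled hidden states for 8 responses
    feedback = torch.randint(0, NUM_LEVELS, (8,))   # ordinal feedback level per response
    reward, logits = head(pooled)
    loss = ordinal_loss(logits, feedback)
    loss.backward()
    print(reward.shape, loss.item())
```

The point of this formulation is that a single scalar score plus ordered cutpoints preserves a total ordering over responses, so the resulting model can still rank candidates for downstream uses such as the online DPO training described above.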

Hao Peng, Yunjia Qi, Xiaozhi Wang, Zijun Yao, Lei Hou, Juanzi Li • 2026

Related benchmarks

Task | Dataset | Result | Rank
Instruction Following | IFEval | -- | 292
Reward Modeling | RewardBench | Accuracy 86 | 70
Question Answering | MMLU-Pro | Accuracy 48.9 | 56
Reward Modeling | JudgeBench | Accuracy 66 | 45
Open-ended generation | AlpacaEval 2.0 | -- | 43
Reward Modeling | PPE Correctness | Accuracy 65.6 | 33
Reward Modeling | RM-Bench Hard | Accuracy 69.7 | 10
Reward Modeling | RM-Bench Normal | Accuracy 78.4 | 10
Reward Modeling | PPE Human | Accuracy 62.5 | 10
Reward Modeling | RM-Bench Easy | Accuracy 83.5 | 10
