
Reward Auditor: Inference on Reward Modeling Suitability in Real-World Perturbed Scenarios

About

Reliable reward models (RMs) are critical for the safe alignment of large language models (LLMs). However, current RM evaluation methods focus solely on preference-perception accuracy in specific given scenarios, obscuring critical vulnerabilities that RMs exhibit in the real world. We argue that the true challenge lies in assessing a novel dimension: Suitability, defined as conditional reliability under specific real-world perturbations. To this end, we introduce Reward Auditor, a hypothesis-testing framework designed for RM suitability inference. Rather than answering "How accurate is the RM's preference perception on given samples?", it employs scientific auditing to answer: "Can we infer that an RM exhibits systematic vulnerabilities in specific real-world scenarios?" Under real-world perturbed scenarios, Reward Auditor quantifies statistical significance and effect size by auditing the degradation of the RM's preference-perception confidence distribution. This enables inference of both the certainty and the severity of RM vulnerabilities across diverse real-world scenarios, laying a solid foundation for next-generation LLM alignment systems that are verifiably safe, more robust, and trustworthy.
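The core idea — testing whether a perturbation significantly degrades an RM's preference-perception confidence, and reporting an effect size alongside the p-value — can be illustrated with a minimal sketch. The paper's exact test statistic is not specified here, so this example assumes a simple paired test with a normal approximation and Cohen's d; the function name `paired_degradation_test` and the synthetic data are illustrative, not the authors' implementation.

```python
import math
import random

def cohens_d(diffs):
    """Effect size of the mean confidence drop, in units of its std. dev."""
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((x - mean) ** 2 for x in diffs) / (n - 1)
    return mean / math.sqrt(var)

def paired_degradation_test(clean_conf, perturbed_conf):
    """One-sided paired test: did the perturbation lower RM confidence?

    clean_conf[i] / perturbed_conf[i]: the RM's preference-perception
    confidence on sample i before / after the real-world perturbation.
    Returns (z statistic, one-sided p-value, Cohen's d).
    Large-sample normal approximation; a sketch, not the paper's method.
    """
    diffs = [c - p for c, p in zip(clean_conf, perturbed_conf)]  # >0 = degradation
    n = len(diffs)
    mean = sum(diffs) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in diffs) / (n - 1))
    z = mean / (sd / math.sqrt(n))
    # One-sided p-value under H1: mean drop > 0 (systematic vulnerability)
    p = 0.5 * math.erfc(z / math.sqrt(2))
    return z, p, cohens_d(diffs)

if __name__ == "__main__":
    random.seed(0)
    # Synthetic confidences: the perturbation shifts confidence down by ~0.15
    clean = [random.gauss(0.8, 0.1) for _ in range(200)]
    perturbed = [c - 0.15 + random.gauss(0.0, 0.05) for c in clean]
    z, p, d = paired_degradation_test(clean, perturbed)
    print(f"z={z:.2f}  p={p:.3g}  d={d:.2f}")
```

A significant p-value says the degradation is real (certainty); the effect size says how large it is (severity) — the two axes the framework reports per scenario.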

Jianxiang Zang, Yongda Wei, Ruxue Bai, Shiyu Jiang, Nijia Mo, Binhong Li, Qiang Sun, Hui Liu • 2025

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Reward Modeling | Reward Bench Math | -- | 52 |
| Reward Modeling | RM Bench Code | -- | 52 |
| Reward Model Suitability Audit | RM-Bench Chat | -- | 26 |
| Reward Modeling | Reward Bench safety subset prompt perturbations 2 | -- | 26 |
| Reward Modeling | Reward Bench safety subset response perturbations 2 | -- | 26 |
| Reward Modeling Suitability Evaluation | RM Bench Safety-accept | -- | 26 |
| Reward Modeling Suitability Evaluation | RM Bench Math | -- | 26 |
