Adversarial Training of Reward Models

About

Reward modeling has emerged as a promising approach for the scalable alignment of language models. However, contemporary reward models (RMs) often lack robustness, awarding high rewards to low-quality, out-of-distribution (OOD) samples. This can lead to reward hacking, where policies exploit unintended shortcuts to maximize rewards, undermining alignment. To address this challenge, we introduce Adv-RM, a novel adversarial training framework that automatically identifies adversarial examples: responses that receive high rewards from the target RM but are OOD and of low quality. By leveraging reinforcement learning, Adv-RM trains a policy to generate adversarial examples that reliably expose vulnerabilities in large state-of-the-art reward models such as Nemotron 340B RM. Incorporating these adversarial examples into the reward training process improves the robustness of RMs, mitigating reward hacking and enhancing downstream performance in RLHF. We demonstrate that Adv-RM significantly outperforms conventional RM training, increasing stability and enabling more effective RLHF training in both synthetic and real-data settings.

Alexander Bukharin, Haifeng Qian, Shengyang Sun, Adithya Renduchintala, Soumye Singhal, Zhilin Wang, Oleksii Kuchaiev, Olivier Delalleau, Tuo Zhao • 2025
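
To make the two-stage recipe in the abstract concrete, below is a minimal, self-contained sketch: first search for responses that a trained RM scores highly despite being OOD and low quality, then fold those samples back into preference training as rejected examples. Everything here is an illustrative assumption rather than the paper's implementation: responses are toy feature vectors, `true_quality` is a hypothetical stand-in for human preference labels, and plain gradient ascent on inputs replaces the RL-trained adversarial policy that Adv-RM actually uses.

```python
# Toy sketch of adversarial reward-model training in the spirit of Adv-RM.
# All models, data, and hyperparameters are illustrative stand-ins.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)
DIM = 16  # dimension of toy "response" feature vectors


class RewardModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(DIM, 32), nn.Tanh(), nn.Linear(32, 1))

    def forward(self, x):
        return self.net(x).squeeze(-1)


def true_quality(x):
    # Hypothetical quality oracle standing in for human preference judgments.
    return torch.tanh(x).sum(-1)


def bt_loss(rm, chosen, rejected):
    # Bradley-Terry preference loss: the chosen response should outscore the rejected one.
    return -F.logsigmoid(rm(chosen) - rm(rejected)).mean()


def preference_batch(n=64):
    x, y = torch.randn(n, DIM), torch.randn(n, DIM)  # in-distribution responses
    better = (true_quality(x) > true_quality(y)).unsqueeze(-1)
    return torch.where(better, x, y), torch.where(better, y, x)


# 1) Train a baseline RM on in-distribution preference pairs.
rm = RewardModel()
opt = torch.optim.Adam(rm.parameters(), lr=1e-3)
for _ in range(1000):
    chosen, rejected = preference_batch()
    opt.zero_grad()
    bt_loss(rm, chosen, rejected).backward()
    opt.step()

# 2) Adversarial search: find OOD inputs that the frozen RM scores highly.
#    (Adv-RM trains a generation policy with RL; gradient ascent on toy
#    feature vectors is a crude stand-in for that search.)
for p in rm.parameters():
    p.requires_grad_(False)
adv = (4.0 * torch.randn(256, DIM)).requires_grad_()  # start far from training data
adv_opt = torch.optim.Adam([adv], lr=0.05)
for _ in range(300):
    adv_opt.zero_grad()
    (-rm(adv)).mean().backward()  # ascend the RM's score
    adv_opt.step()
adv = adv.detach()

# Keep only genuinely adversarial samples: high RM score, low actual quality.
bad = adv[true_quality(adv) < 0]
if len(bad) == 0:
    raise SystemExit("no adversarial examples found in this toy run")
print(f"found {len(bad)} adversarial examples | "
      f"RM score {rm(bad).mean().item():.2f} | "
      f"quality {true_quality(bad).mean().item():.2f}")

# 3) Adversarial training: reuse them as rejected responses against good ones.
for p in rm.parameters():
    p.requires_grad_(True)
for _ in range(1000):
    chosen, rejected = preference_batch()
    k = min(len(bad), len(chosen))
    opt.zero_grad()
    (bt_loss(rm, chosen, rejected) + bt_loss(rm, chosen[:k], bad[:k])).backward()
    opt.step()
print(f"post-training RM score on adversarial set: {rm(bad).mean().item():.2f}")
```

One design point the sketch tries to preserve: the adversarial search queries only the frozen RM, while the quality oracle is applied afterwards, as a judge, to confirm that the high-reward samples really are poor before they are reused as rejected responses in retraining.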

Related benchmarks

| Task                   | Dataset       | Accuracy | Rank |
|------------------------|---------------|----------|------|
| Math Reasoning         | AMC           | 63.3     | 70   |
| Math Reasoning         | JEEBench      | 62.4     | 60   |
| Math Reasoning         | OlympiadBench | 77.6     | 54   |
| Mathematical Reasoning | MATH500       | 89.2     | 30   |
| Mathematical Reasoning | AIME 25       | 87.9     | 26   |
