
AdvJudge-Zero: Binary Decision Flips in LLM-as-a-Judge via Adversarial Control Tokens

About

Reward models and LLM-as-a-Judge systems are central to modern post-training pipelines such as RLHF, DPO, and RLAIF, where they provide scalar feedback and binary decisions that guide model selection and RL-based fine-tuning. We show that these judge systems exhibit a recurring vulnerability: short sequences of low-perplexity control tokens can flip many binary evaluations from correct "No" judgments to incorrect "Yes" judgments by steering the last-layer logit gap. These control tokens are patterns that a policy model could plausibly generate during post-training, and thus represent realistic reward-hacking risks rather than worst-case adversarial strings. Our method, AdvJudge-Zero, uses the model's next-token distribution and beam-search exploration to discover diverse control-token sequences from scratch, and our analysis shows that the induced hidden-state perturbations concentrate in a low-rank "soft mode" that is anti-aligned with the judge's refusal direction. Empirically, these tokens cause very high false positive rates when large open-weight and specialized judge models score incorrect answers on math and reasoning benchmarks. Finally, we show that LoRA-based adversarial training on small sets of control-token-augmented examples can markedly reduce these false positives while preserving evaluation quality.
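The discovery procedure described above can be sketched as beam search over candidate token suffixes, scoring each candidate by the judge's yes-minus-no logit gap. The snippet below is a minimal illustration only: it replaces the real LLM judge with a toy surrogate scoring function, and the vocabulary, token names, and `toy_logit_gap` weights are all invented for the sketch (the paper's method queries an actual judge model's next-token distribution).

```python
import heapq

# Toy surrogate for the judge: returns a (yes - no) logit gap after a
# candidate control-token suffix is appended. In the real attack this
# score would come from an LLM judge's last-layer logits; the weights
# here are purely illustrative.
def toy_logit_gap(suffix):
    weights = {"<ok>": 1.5, "<fin>": 0.9, "<pad>": -0.2, "<ans>": 0.4}
    return sum(weights.get(tok, -1.0) for tok in suffix) - 1.0

VOCAB = ["<ok>", "<fin>", "<pad>", "<ans>", "<eos>"]  # hypothetical tokens

def beam_search_control_tokens(max_len=3, beam_width=2):
    """Beam search for short token suffixes that maximize the judge's
    yes-minus-no logit gap (a positive gap means the verdict flips)."""
    beams = [((), toy_logit_gap(()))]
    for _ in range(max_len):
        candidates = []
        for seq, _ in beams:
            for tok in VOCAB:
                new_seq = seq + (tok,)
                candidates.append((new_seq, toy_logit_gap(new_seq)))
        # keep only the top-scoring partial sequences
        beams = heapq.nlargest(beam_width, candidates, key=lambda c: c[1])
    return beams[0]

best_seq, best_gap = beam_search_control_tokens()
print(best_seq, best_gap)
```

With a real judge, the scoring call would be a forward pass returning the difference between the "Yes" and "No" token logits, and diversity across discovered sequences would come from widening the beam and sampling from the model's next-token distribution.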

Tung-Ling Li, Yuhao Wu, Hongliang Liu • 2025

Related benchmarks

Task                    Dataset     Result         Rank
Robustness Evaluation   MATH        FPR (%) 0.00   20
Robustness Evaluation   AIME        FPR (%) 0.00   20
Robustness Evaluation   GSM8K       FPR (%) 0.01   20
Robustness Evaluation   MultiRLVR   FPR (%) 2.33   20
