
Improve LLM-as-a-Judge Ability as a General Ability

About

LLM-as-a-Judge leverages the generative and reasoning capabilities of large language models (LLMs) to evaluate LLM responses across diverse scenarios, providing accurate preference signals. This approach plays a vital role in aligning LLMs with human values, helping ensure ethical and reliable AI outputs that conform to societal norms. Recent studies have proposed many methods for training LLMs as generative judges, but most are data-intensive or lack accuracy, and they focus solely on the model's judging ability. In this work, we treat judging as a general ability of the LLM and implement a two-stage training approach, comprising supervised fine-tuning (SFT) warm-up and direct preference optimization (DPO) enhancement, to achieve judge-style adaptation and improve judgment accuracy. We additionally introduce an efficient data synthesis method for generating judgmental content. Experimental results demonstrate that our approach, using only about 2% to 40% of the data required by other methods, achieves SOTA performance on RewardBench. Furthermore, by constructing complex judge tasks, our training method also enhances the model's general capabilities, and in our tests the judge signals provided by our model significantly improved the downstream DPO training of our internal policy models. We also open-source our model weights and training data to facilitate further research.
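To make the two-stage recipe concrete, the sketch below wires up an SFT warm-up followed by DPO enhancement using the Hugging Face TRL library. This is a minimal sketch of the general pattern, not the authors' released configuration: the base model, data file names, and hyperparameters are illustrative placeholders.

```python
# Minimal sketch of a two-stage judge-training pipeline (SFT warm-up, then
# DPO enhancement), assuming Hugging Face TRL. All names below are
# placeholders, not the paper's released setup.
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import SFTConfig, SFTTrainer, DPOConfig, DPOTrainer

model_name = "Qwen/Qwen2.5-7B-Instruct"  # placeholder base model
tokenizer = AutoTokenizer.from_pretrained(model_name)

# --- Stage 1: SFT warm-up (judge-style adaptation) ---
# Each row holds a "text" field: a judge prompt followed by a synthesized
# judgment. "sft_judgments.jsonl" is a hypothetical file name.
sft_data = load_dataset("json", data_files="sft_judgments.jsonl")["train"]
sft_trainer = SFTTrainer(
    model=model_name,
    train_dataset=sft_data,
    args=SFTConfig(output_dir="judge-sft", num_train_epochs=1),
)
sft_trainer.train()
sft_trainer.save_model("judge-sft")

# --- Stage 2: DPO enhancement (judgment accuracy) ---
# Each row needs "prompt", "chosen", and "rejected" fields: a judge prompt
# plus a correct and an incorrect judgment. "dpo_pairs.jsonl" is hypothetical.
dpo_data = load_dataset("json", data_files="dpo_pairs.jsonl")["train"]
model = AutoModelForCausalLM.from_pretrained("judge-sft")
dpo_trainer = DPOTrainer(
    model=model,
    train_dataset=dpo_data,
    processing_class=tokenizer,
    args=DPOConfig(output_dir="judge-dpo", beta=0.1, num_train_epochs=1),
)
dpo_trainer.train()
dpo_trainer.save_model("judge-dpo")
```

The key design point the abstract emphasizes is the division of labor between the stages: SFT adapts the model to the judging format and style, while DPO sharpens accuracy by contrasting correct against incorrect judgments over the synthesized data.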

Jiachen Yu, Shaoning Sun, Xiaohui Hu, Jiaxu Yan, Kaidong Yu, Xuelong Li • 2025

Related benchmarks

Task             Dataset                  Metric         Result  Rank
Reward Modeling  JudgeBench (test)        Overall        66.3    40
Reward Modeling  RM-Bench (test)          Overall Score  72.3    39
Reward Modeling  PPE Correctness (test)   PPE Corr       60.4    26
Reward Modeling  RewardBench (test)       RWBench        0.927   25
