
FairJudge: An Adaptive, Debiased, and Consistent LLM-as-a-Judge

About

Existing LLM-as-a-Judge systems suffer from three fundamental limitations: limited adaptivity to task- and domain-specific evaluation criteria, systematic biases driven by non-semantic cues such as position, length, format, and model provenance, and evaluation inconsistency that leads to contradictory judgments across different evaluation modes (e.g., pointwise versus pairwise). To address these issues, we propose FairJudge, an adaptive, debiased, and consistent LLM-as-a-Judge. Unlike prior approaches that treat the judge as a static evaluator, FairJudge models judging behavior itself as a learnable and regularized policy. From a data-centric perspective, we construct a high-information-density judging dataset that explicitly injects supervision signals aligned with evaluation behavior. Building on this dataset, we adopt a curriculum-style SFT-DPO-GRPO training paradigm that progressively aligns rubric adherence, bias mitigation, and cross-mode consistency, while avoiding catastrophic forgetting. Experimental results on multiple internal and public benchmarks show that FairJudge consistently improves agreement and F1, reduces non-semantic biases, and outperforms substantially larger instruction-tuned LLMs. All resources will be publicly released after acceptance to facilitate future research.
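Two of the quantities discussed above — agreement with human labels and position bias (a verdict flipping when the two candidate responses are swapped) — can be made concrete with a small sketch. This is an illustrative example, not the paper's implementation: `judge` is a hypothetical stand-in for any pairwise judge that takes `(prompt, response_a, response_b)` and returns `"A"` or `"B"`, and the toy length-preferring judge exists only to exercise the metrics.

```python
# Hypothetical sketch of two judge diagnostics mentioned in the abstract:
# agreement with human labels, and position bias under response swapping.
# `judge(prompt, a, b) -> "A" | "B"` is an assumed interface, not FairJudge's API.

def agreement(judge, examples):
    """Fraction of examples where the judge's verdict matches the human label."""
    hits = sum(judge(p, a, b) == label for p, a, b, label in examples)
    return hits / len(examples)

def position_bias_rate(judge, examples):
    """Fraction of examples whose verdict flips when A and B are swapped.

    A position-consistent judge prefers the same underlying response
    regardless of presentation order."""
    flips = 0
    for p, a, b, _ in examples:
        first = judge(p, a, b)
        swapped = judge(p, b, a)
        # After swapping, "A" refers to the original B and vice versa.
        consistent = (first == "A" and swapped == "B") or \
                     (first == "B" and swapped == "A")
        flips += not consistent
    return flips / len(examples)

# Toy judge exhibiting length bias (always prefers the longer response)
# while being position-consistent by construction.
longer = lambda p, a, b: "A" if len(a) >= len(b) else "B"

examples = [
    ("q1", "short", "a much longer answer", "B"),
    ("q2", "detailed reply here", "ok", "A"),
    ("q3", "x", "yy", "A"),
]

print(agreement(longer, examples))           # 2/3: length bias costs it q3
print(position_bias_rate(longer, examples))  # 0.0: swapping never flips it
```

The toy judge shows why the two diagnostics are independent: it is perfectly position-consistent yet still systematically biased by a non-semantic cue (length), which is exactly the kind of failure the debiasing objective targets.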

Bo Yang, Lanfei Feng, Yunkui Chen, Yu Zhang, Xiao Xu, Shijian Li • 2026

Related benchmarks

| Task | Dataset | Result | Rank |
|------|---------|--------|------|
| LLM-as-a-Judge | PandaLM Human Annotations (test) | Agreement: 0.7683 | 13 |
| LLM-as-a-Judge | FairJudge Benchmark 1K (test) | Agreement: 71.5 | 13 |
| LLM-as-a-Judge | JudgeLM (test) | Agreement: 78.82 | 13 |
| Reward Modeling Evaluation | Reward-Bench | Agreement: 84.79 | 12 |
