
A Judge-Aware Ranking Framework for Evaluating Large Language Models without Ground Truth

About

Evaluating large language models (LLMs) on open-ended tasks without ground-truth labels is increasingly done via the LLM-as-a-judge paradigm. A critical but under-modeled issue is that judge LLMs differ substantially in reliability; treating all judges equally can yield biased leaderboards and misleading uncertainty estimates. More data can make evaluation more confidently wrong under misspecified aggregation. We propose a judge-aware ranking framework that extends the Bradley-Terry-Luce model by introducing judge-specific discrimination parameters, jointly estimating latent model quality and judge reliability from pairwise comparisons without reference labels. We establish identifiability up to natural normalizations and prove consistency and asymptotic normality of the maximum likelihood estimator, enabling confidence intervals for score differences and rank comparisons. Across multiple public benchmarks and a newly collected dataset, our method improves agreement with human preferences, achieves higher data efficiency than unweighted baselines, and produces calibrated uncertainty quantification for LLM rankings.
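To make the aggregation step concrete, below is a minimal sketch of a judge-aware Bradley-Terry-Luce fit, assuming the judge-specific discrimination parameters enter multiplicatively as P(i beats j | judge k) = sigmoid(a_k * (theta_i - theta_j)). This form, the function names, the gauge normalization, and the toy data are illustrative assumptions, not the paper's exact specification.

```python
# Illustrative sketch only: the multiplicative form
#   P(i beats j | judge k) = sigmoid(a_k * (theta_i - theta_j))
# is one natural reading of "judge-specific discrimination parameters";
# the paper's exact parameterization and estimator may differ.
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit  # logistic sigmoid


def neg_log_likelihood(params, comparisons, n_models, n_judges):
    """comparisons: iterable of (winner, loser, judge) index triples."""
    theta = params[:n_models]      # latent model quality
    a = np.exp(params[n_models:])  # judge discrimination, kept positive via log-parameterization
    nll = 0.0
    for winner, loser, judge in comparisons:
        p_win = expit(a[judge] * (theta[winner] - theta[loser]))
        nll -= np.log(p_win + 1e-12)
    return nll


def fit_judge_aware_btl(comparisons, n_models, n_judges):
    x0 = np.zeros(n_models + n_judges)
    res = minimize(neg_log_likelihood, x0,
                   args=(comparisons, n_models, n_judges), method="L-BFGS-B")
    theta = res.x[:n_models]
    a = np.exp(res.x[n_models:])
    # Identifiability normalizations: the likelihood is unchanged by shifting theta
    # or by rescaling (a, theta) in opposite directions, so fix the gauge here
    # (center theta, set the geometric mean of the discriminations to 1).
    scale = np.exp(np.mean(np.log(a)))
    return (theta - theta.mean()) * scale, a / scale


if __name__ == "__main__":
    # Toy data: 3 models, 2 judges; judge 0 is fairly consistent, judge 1 is noisier.
    data = [(0, 1, 0), (0, 2, 0), (1, 2, 0), (2, 1, 0),
            (0, 1, 1), (1, 0, 1), (2, 0, 1), (1, 2, 1)]
    quality, discrimination = fit_judge_aware_btl(data, n_models=3, n_judges=2)
    print("model quality:", quality)
    print("judge discrimination:", discrimination)
```

Under this reading, a judge whose verdicts are nearly random receives a small discrimination parameter, so its comparisons contribute little to the fitted quality scores, while a reliable judge's comparisons are weighted more heavily; this is one way the unequal-reliability issue described above can be handled without reference labels.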

Mingyuan Xu, Xinzi Tan, Jiawei Wu, Doudou Zhou • 2026

Related benchmarks

Task                                Dataset         Result                 Rank
Conversational AI Evaluation        Chatbot Arena   Rank: 1                40
Multi-task Language Understanding   MMLU            MMLU Score: 86.4       28
Multi-turn Dialogue Evaluation      MT-Bench        MT-Bench Score: 8.99   5
Question Answering                  TruthfulQA      TruthfulQA Score: 30   3
