
Who Judges the Judge? LLM Jury-on-Demand: Building Trustworthy LLM Evaluation Systems

About

As Large Language Models (LLMs) become integrated into high-stakes domains, there is a growing need for evaluation methods that are both scalable for real-time deployment and reliable for critical decision-making. Human evaluation is reliable but slow and costly; single LLM judges are biased, and static juries lack adaptability. To overcome these limitations, we propose LLM Jury-on-Demand, a dynamic, learning-based framework for scalable and context-aware evaluation. Our method trains a set of reliability predictors that assess when LLM judges will agree with human experts, leveraging token distributions, embeddings, and structural input features. This enables fully adaptive evaluation: for each data point, an optimal jury of the most reliable judges is dynamically selected, and their scores are aggregated using their reliability as weights. Experiments on summarization and RAG benchmarks show that our dynamic jury system achieves significantly higher correlation with human judgment than both single-judge and static-jury baselines. These results highlight the promise of adaptive, learning-based juries for building scalable, reliable, and trustworthy evaluation systems for modern LLMs in high-stakes domains.

Xiaochuan Li, Ke Wang, Girija Gouda, Shubham Choudhary, Yaqun Wang, Linwei Hu, Joel Vaughan, Freddy Lecue • 2025
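The abstract describes per-example judge selection and reliability-weighted aggregation only at a high level. Below is a minimal sketch of that step, assuming hypothetical per-judge reliability predictors with a scikit-learn-style predict_proba interface, a top-k jury size, and numeric judge scores; none of these details are specified above, and the paper's actual predictors and features may differ.

```python
import numpy as np

def score_with_dynamic_jury(features, judge_scores, reliability_predictors, jury_size=3):
    """Select the most reliable judges for one example and aggregate their scores.

    features:               1-D numpy feature vector for this data point (e.g., token-distribution,
                            embedding, and structural features, as mentioned in the abstract)
    judge_scores:           numpy array of shape (n_judges,), the raw score each LLM judge assigned
    reliability_predictors: list of per-judge models whose predict_proba returns
                            P(judge agrees with human experts | features)
    jury_size:              number of judges to seat on the jury (hypothetical parameter)
    """
    # 1. Predict, per judge, how likely it is to agree with human experts on this input.
    reliabilities = np.array([
        predictor.predict_proba(features.reshape(1, -1))[0, 1]
        for predictor in reliability_predictors
    ])

    # 2. Keep only the top-k most reliable judges for this particular example.
    jury = np.argsort(reliabilities)[::-1][:jury_size]

    # 3. Aggregate the selected judges' scores, weighting each by its predicted reliability.
    weights = reliabilities[jury] / reliabilities[jury].sum()
    return float(np.dot(weights, judge_scores[jury]))
```

Weighting by predicted reliability, rather than averaging all judges uniformly, is what separates this dynamic jury from a static one.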

Related benchmarks

Task                     | Dataset    | Metric        | Result | Rank
Summarization Evaluation | SummEval   | --            | --     | 40
Relevance                | ALCE       | Kendall's Tau | 0.61   | 15
Relevance                | HotpotQA   | Kendall's Tau | 0.9    | 15
Completeness             | ALCE       | Kendall's Tau | 0.47   | 11
Completeness             | ASQA       | Kendall's Tau | 0.54   | 11
Completeness             | Qasper     | Kendall's Tau | 0.44   | 11
Groundedness             | CAQA       | Kendall's Tau | 0.68   | 11
Groundedness             | RAGTruth   | Kendall's Tau | 0.57   | 11
Summarization            | SummEval   | Completeness  | 0.72   | 11
Summarization            | UniSumEval | Completeness  | 66     | 11

Showing 10 of 28 rows.
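For reference, the Kendall's Tau values above measure rank correlation between the jury's scores and human annotations. A minimal sketch of how such a value can be computed with SciPy (the scores below are made-up placeholders, not the paper's data):

```python
from scipy.stats import kendalltau

# Hypothetical per-item scores: dynamic-jury outputs vs. human expert ratings.
jury_scores  = [0.82, 0.40, 0.95, 0.61, 0.73]
human_scores = [4, 2, 5, 3, 4]

# kendalltau returns the rank-correlation statistic and its p-value.
tau, p_value = kendalltau(jury_scores, human_scores)
print(f"Kendall's Tau = {tau:.2f} (p = {p_value:.3f})")
```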
