
Judge as A Judge: Improving the Evaluation of Retrieval-Augmented Generation through the Judge-Consistency of Large Language Models

About

Retrieval-Augmented Generation (RAG) has proven effective in alleviating hallucinations in Large Language Models (LLMs). However, existing automated evaluation metrics cannot fairly evaluate the outputs generated by RAG models during training and evaluation. LLM-based judgment models have the potential to produce high-quality judgments, but they are highly sensitive to evaluation prompts, leading to inconsistencies when judging the outputs of RAG models. This paper introduces the Judge-Consistency (ConsJudge) method, which enhances LLMs to generate more accurate evaluations for RAG models. Specifically, ConsJudge prompts LLMs to generate different judgments based on various combinations of judgment dimensions, uses judge-consistency to evaluate these judgments, and selects accepted and rejected judgments for DPO training. Our experiments show that ConsJudge effectively provides more accurate judgments for optimizing RAG models across various models and datasets. Further analysis reveals that judgments generated by ConsJudge show high agreement with those of a superior LLM. All code is available at https://github.com/OpenBMB/ConsJudge.
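The preference-pair construction described above can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the function names, the judgment format, and the use of simple majority agreement as a proxy for judge-consistency are all assumptions.

```python
# Hypothetical sketch of ConsJudge-style DPO pair selection.
# Each "judgment" is assumed to be a dict with the dimension combination
# it was prompted with and the verdict it produced; majority agreement
# across verdicts stands in for the paper's judge-consistency measure.
from collections import Counter

def build_dpo_pair(judgments):
    """Pick an accepted judgment (agrees with the consistent majority
    verdict) and a rejected one (dissents from it) for DPO training."""
    counts = Counter(j["verdict"] for j in judgments)
    majority_verdict, _ = counts.most_common(1)[0]
    accepted = next(j for j in judgments if j["verdict"] == majority_verdict)
    rejected = next(
        (j for j in judgments if j["verdict"] != majority_verdict), None
    )
    return accepted, rejected

# Judgments generated under different combinations of judgment dimensions.
judgments = [
    {"dimensions": ["accuracy", "completeness"], "verdict": "A"},
    {"dimensions": ["accuracy"], "verdict": "A"},
    {"dimensions": ["fluency"], "verdict": "B"},
]
accepted, rejected = build_dpo_pair(judgments)
```

Here the two judgments favoring response "A" are mutually consistent, so one of them becomes the accepted judgment and the dissenting "B" judgment becomes the rejected one.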

Shuliang Liu, Xinze Li, Zhenghao Liu, Yukun Yan, Cheng Yang, Zheni Zeng, Zhiyuan Liu, Maosong Sun, Ge Yu • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Retrieval-Augmented Generation | HotpotQA | - | - | 52 |
| Retrieval-Augmented Generation | NQ | Accuracy | 48.78 | 23 |
| Retrieval-Augmented Generation | TriviaQA | Accuracy | 88.26 | 11 |
| Retrieval-Augmented Generation | ASQA | str-EM | 42.44 | 11 |
| Retrieval-Augmented Generation | MARCOQA | LLM Score | 88.25 | 11 |
| Retrieval-Augmented Generation | WoW | LLM Score | 88.87 | 11 |

Code: https://github.com/OpenBMB/ConsJudge