ChatEval: Towards Better LLM-based Evaluators through Multi-Agent Debate

About

Text evaluation has historically posed significant challenges, often demanding substantial labor and time costs. With the emergence of large language models (LLMs), researchers have explored LLMs' potential as alternatives to human evaluation. While these single-agent-based approaches show promise, experimental results suggest that further advancements are needed to bridge the gap between their current effectiveness and human-level evaluation quality. Recognizing that best practices in human evaluation often involve multiple annotators collaborating on the assessment, we turn to a multi-agent debate framework, moving beyond single-agent prompting strategies. The multi-agent approach enables a group of LLMs to synergize with an array of intelligent counterparts, harnessing their distinct capabilities and expertise to enhance efficiency and effectiveness in handling intricate tasks. In this paper, we construct a multi-agent referee team called ChatEval to autonomously discuss and evaluate the quality of responses generated by different models on open-ended questions and traditional natural language generation (NLG) tasks. Our analysis shows that ChatEval transcends mere textual scoring, offering a human-mimicking evaluation process for reliable assessments. Our code is available at https://github.com/chanchimin/ChatEval.

Chi-Min Chan, Weize Chen, Yusheng Su, Jianxuan Yu, Wei Xue, Shanghang Zhang, Jie Fu, Zhiyuan Liu • 2023
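The abstract describes the core mechanism only at a high level: several LLM agents with distinct personas critique a candidate response over multiple debate rounds while sharing a common transcript, and their verdicts are then aggregated. Below is a minimal sketch of that loop, not the authors' implementation (see the linked repository for that); `llm_call`, the role descriptions, the prompt template, and the mean-score aggregation are all illustrative assumptions.

```python
# Minimal sketch of a ChatEval-style multi-agent debate referee.
# Hypothetical throughout: wire `llm_call` to whatever LLM API you use;
# roles, prompts, and aggregation are illustrative, not the paper's exact setup.
from statistics import mean

def llm_call(prompt: str) -> str:
    """Placeholder: replace with a call to your LLM of choice."""
    raise NotImplementedError

ROLES = [
    "a strict grader focused on factual accuracy",
    "a linguist focused on fluency and coherence",
    "a general reader judging overall helpfulness",
]

def debate_and_score(question: str, response: str, rounds: int = 2):
    history: list[str] = []  # shared transcript visible to every agent
    for _ in range(rounds):
        for role in ROLES:
            prompt = (
                f"You are {role}.\n"
                f"Question: {question}\n"
                f"Response under evaluation: {response}\n"
                "Discussion so far:\n" + "\n".join(history) +
                "\nGive a short critique, then a score from 1 to 10 "
                "as the final token of your reply."
            )
            history.append(f"[{role}] {llm_call(prompt)}")
    # Aggregate the final round: naive parse that assumes each agent
    # ended its last turn with a bare numeric score.
    final_turns = history[-len(ROLES):]
    scores = [float(turn.rsplit(maxsplit=1)[-1]) for turn in final_turns]
    return mean(scores), history
```

A production setup would need more robust score extraction than the last-token parse above, and the paper itself studies variations on the communication pattern among agents, so treat this as a skeleton of the debate loop rather than a faithful reproduction.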

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Question Answering | ARC Challenge | -- | 906 |
| Mathematical Reasoning | MATH | Accuracy: 49.9 | 882 |
| Long-context Language Understanding | LongBench | M-Avg: 53.56 | 292 |
| Science Question Answering | ARC-C | -- | 193 |
| Graduate-level Question Answering | GPQA | Accuracy: 31.1 | 184 |
| Question Answering | SQuAD | Exact Match: 87.33 | 83 |
| Summarization Evaluation | SummEval | Avg Spearman Rho: 0.528 | 45 |
| Language Understanding | MMLU | RA: 73 | 31 |
| Dialogue Evaluation (Human Correlation) | Topical-Chat | Naturalness Pearson (r): 0.62 | 26 |
| Social Risks (2-class) Evaluation | ValEval Generalized | Accuracy: 92.16 | 16 |
Showing 10 of 44 rows.
