
G-Eval: NLG Evaluation using GPT-4 with Better Human Alignment

About

The quality of texts generated by natural language generation (NLG) systems is hard to measure automatically. Conventional reference-based metrics, such as BLEU and ROUGE, have been shown to have relatively low correlation with human judgments, especially for tasks that require creativity and diversity. Recent studies suggest using large language models (LLMs) as reference-free metrics for NLG evaluation, which have the benefit of being applicable to new tasks that lack human references. However, these LLM-based evaluators still have lower human correspondence than medium-size neural evaluators. In this work, we present G-Eval, a framework for using large language models with chain-of-thought (CoT) and a form-filling paradigm to assess the quality of NLG outputs. We experiment with two generation tasks, text summarization and dialogue generation. We show that G-Eval with GPT-4 as the backbone model achieves a Spearman correlation of 0.514 with human judgments on the summarization task, outperforming all previous methods by a large margin. We also present a preliminary analysis of the behavior of LLM-based evaluators, and highlight the potential issue of LLM-based evaluators being biased towards LLM-generated texts. The code is at https://github.com/nlpyang/geval
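The abstract describes the G-Eval recipe: the evaluator LLM is given evaluation criteria and auto-generated chain-of-thought steps, then fills in a score field on a form, and the final score is a probability-weighted sum over the candidate score tokens. A minimal sketch of that scoring idea (the prompt wording and the token probabilities below are illustrative assumptions, not the paper's exact prompts or model outputs):

```python
def build_geval_prompt(criteria: str, cot_steps: str, source: str, output: str) -> str:
    """Assemble a G-Eval-style form-filling prompt (illustrative wording)."""
    return (
        f"Evaluation criteria:\n{criteria}\n\n"
        f"Evaluation steps:\n{cot_steps}\n\n"
        f"Source text:\n{source}\n\n"
        f"Generated text:\n{output}\n\n"
        "Evaluation form (score 1-5):\n- Coherence:"
    )

def weighted_score(token_probs: dict[str, float]) -> float:
    """Probability-weighted score: sum_i p(s_i) * s_i over score tokens s_i."""
    return sum(int(tok) * p for tok, p in token_probs.items())

# Hypothetical probabilities the model assigns to each score token;
# weighting by them yields a fine-grained score instead of a single integer.
probs = {"1": 0.0, "2": 0.1, "3": 0.2, "4": 0.5, "5": 0.2}
print(round(weighted_score(probs), 3))
```

In practice these probabilities would come from the backbone model's token log-probabilities for the score field; the weighting step is what lets G-Eval produce continuous scores that correlate better with graded human judgments than raw integer outputs.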

Yang Liu, Dan Iter, Yichong Xu, Shuohang Wang, Ruochen Xu, Chenguang Zhu · 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Summarization Evaluation | SummEval | Avg Spearman Rho | 0.533 | 40 |
| Factual Consistency Evaluation | QAGS XSUM | Spearman Correlation | 53.7 | 39 |
| Factual Consistency Evaluation | QAGS CNNDM | Spearman Correlation | 68.5 | 38 |
| Factual Consistency Evaluation | SummEval | Spearman Correlation | 0.507 | 36 |
| Quantitative evaluation of LLM feedback against human gold standards | 50 SOC analysis reports (test) | Spearman Correlation (ρ) | 0.6 | 30 |
| Dialogue Evaluation Human Correlation | Topical-Chat | Naturalness Pearson (r) | 0.632 | 26 |
| Data-to-text evaluation | SFHOT | Spearman Correlation | 0.364 | 24 |
| Data-to-text evaluation | SFRES | Spearman Correlation | 0.347 | 24 |
| Social Risks (2-class) Evaluation | ValEval Disturb | Accuracy | 0.834 | 16 |
| Social Risks (2-class) Evaluation | ValEval Generalized | Accuracy | 87.23 | 16 |

Showing 10 of 47 rows.
