
Human-like Summarization Evaluation with ChatGPT

About

Evaluating text summarization is a challenging problem, and existing evaluation metrics are far from satisfactory. In this study, we explored ChatGPT's ability to perform human-like summarization evaluation using four human evaluation methods on five datasets. We found that ChatGPT was able to complete annotations relatively smoothly using Likert scale scoring, pairwise comparison, Pyramid, and binary factuality evaluation. Additionally, it outperformed commonly used automatic evaluation metrics on some datasets. Furthermore, we discussed the impact of different prompts, compared its performance with that of human evaluation, and analyzed the generated explanations and invalid responses.

Mingqi Gao, Jie Ruan, Renliang Sun, Xunjian Yin, Shiping Yang, Xiaojun Wan • 2023

Related benchmarks

Task                            Dataset   Result                       Rank
Factual Consistency Evaluation  SummEval  Spearman Correlation: 0.433  36
Meta-evaluation                 SummEval  --                           10
