
Asking and Answering Questions to Evaluate the Factual Consistency of Summaries

About

Practical applications of abstractive summarization models are limited by frequent factual inconsistencies with respect to their input. Existing automatic evaluation metrics for summarization are largely insensitive to such errors. We propose an automatic evaluation protocol called QAGS (pronounced "kags") that is designed to identify factual inconsistencies in a generated summary. QAGS is based on the intuition that if we ask questions about a summary and its source, we will receive similar answers if the summary is factually consistent with the source. To evaluate QAGS, we collect human judgments of factual consistency on model-generated summaries for the CNN/DailyMail (Hermann et al., 2015) and XSUM (Narayan et al., 2018) summarization datasets. QAGS has substantially higher correlations with these judgments than other automatic evaluation metrics. Also, QAGS offers a natural form of interpretability: The answers and questions generated while computing QAGS indicate which tokens of a summary are inconsistent and why. We believe QAGS is a promising tool in automatically generating usable and factually consistent text.
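The comparison at the heart of QAGS can be sketched in a few lines: given questions generated from the summary, answer each one against both the summary and the source, then average a token-level answer similarity. The question-generation and question-answering models are learned components in the paper; the snippet below assumes their outputs are already available as aligned answer lists, and uses SQuAD-style token F1 as the similarity measure.

```python
from collections import Counter

def token_f1(pred: str, ref: str) -> float:
    """SQuAD-style token-level F1 between two answer strings."""
    pred_toks, ref_toks = pred.lower().split(), ref.lower().split()
    if not pred_toks or not ref_toks:
        return float(pred_toks == ref_toks)
    common = Counter(pred_toks) & Counter(ref_toks)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_toks)
    recall = overlap / len(ref_toks)
    return 2 * precision * recall / (precision + recall)

def qags_score(summary_answers, source_answers):
    """Average answer similarity over questions generated from the summary.

    Inputs are lists of answer strings aligned by question. In the full
    protocol these come from learned QG and QA models (not shown here).
    """
    sims = [token_f1(a, b) for a, b in zip(summary_answers, source_answers)]
    return sum(sims) / len(sims) if sims else 0.0
```

Per-question similarities also give the interpretability mentioned above: a low-scoring question points at the summary span whose answer disagrees with the source.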

Alex Wang, Kyunghyun Cho, Mike Lewis • 2020

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Factuality Evaluation | QAGS XSUM | Pearson Correlation | 0.175 | 19 |
| Factuality Evaluation | | Accuracy | 72.1 | 13 |
| Factuality Evaluation | QAGS CNN | Pearson Correlation | 0.545 | 11 |
| Factual Consistency Evaluation | QAGS-X | Pearson Correlation (rp) | 0.175 | 5 |
| Factual Consistency Evaluation | QAGS-C | Pearson Correlation (rp) | 0.545 | 5 |
