QuestEval: Summarization Asks for Fact-based Evaluation

About

Summarization evaluation remains an open research problem: current metrics such as ROUGE are known to be limited and to correlate poorly with human judgments. To alleviate this issue, recent work has proposed evaluation metrics which rely on question answering models to assess whether a summary contains all the relevant information in its source document. Though promising, the proposed approaches have so far failed to correlate better than ROUGE with human judgments. In this paper, we extend previous approaches and propose a unified framework, named QuestEval. In contrast to established metrics such as ROUGE or BERTScore, QuestEval does not require any ground-truth reference. Nonetheless, QuestEval substantially improves the correlation with human judgments over four evaluation dimensions (consistency, coherence, fluency, and relevance), as shown in the extensive experiments we report.

Thomas Scialom, Paul-Alexis Dray, Patrick Gallinari, Sylvain Lamprier, Benjamin Piwowarski, Jacopo Staiano, Alex Wang • 2021
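To give a feel for the approach, the snippet below is a minimal sketch of QA-based consistency scoring in the spirit of QuestEval, not the authors' implementation (they also released an official questeval Python package). In this simplification, the question-generation step is replaced by hand-written questions, the QA checkpoint distilbert-base-cased-distilled-squad is just an example model, and answers are compared with plain token-level F1, whereas the full metric additionally weights questions by learned answerability and combines precision- and recall-oriented directions.

```python
# Simplified, hedged sketch of QA-based factual consistency scoring.
# Assumptions: hand-written questions stand in for a question-generation model,
# and answer agreement is measured with SQuAD-style token F1 only.

from collections import Counter
from transformers import pipeline

# Any extractive QA model works here; this SQuAD-finetuned checkpoint is a common default.
qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")


def token_f1(pred: str, gold: str) -> float:
    """SQuAD-style token-overlap F1 between two answer strings."""
    pred_toks, gold_toks = pred.lower().split(), gold.lower().split()
    common = Counter(pred_toks) & Counter(gold_toks)
    overlap = sum(common.values())
    if overlap == 0:
        return 0.0
    precision = overlap / len(pred_toks)
    recall = overlap / len(gold_toks)
    return 2 * precision * recall / (precision + recall)


def qa_consistency(source: str, summary: str, questions: list[str]) -> float:
    """Answer each question against the source and the summary, then average answer F1."""
    scores = []
    for q in questions:
        ans_src = qa(question=q, context=source)["answer"]
        ans_sum = qa(question=q, context=summary)["answer"]
        scores.append(token_f1(ans_sum, ans_src))
    return sum(scores) / len(scores)


source = (
    "The Eiffel Tower was completed in 1889 and stands 330 metres tall. "
    "It was designed by the engineer Gustave Eiffel for the World's Fair in Paris."
)
summary = "The Eiffel Tower, designed by Gustave Eiffel, was completed in 1889."
questions = ["When was the Eiffel Tower completed?", "Who designed the Eiffel Tower?"]

print(f"consistency ~ {qa_consistency(source, summary, questions):.2f}")
```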

Related benchmarks

Task | Dataset | Metric | Score | Rank
Factual Consistency Evaluation | SummaC | CGS | 60.4 | 52
Factual Consistency Evaluation | QAGS XSUM | Spearman Correlation | 11.9 | 39
Factual Consistency Evaluation | QAGS CNNDM | Spearman Correlation | 30.8 | 38
Factual Consistency Evaluation | TRUE benchmark | PAWS (AUC-ROC) | 69 | 37
Factual Consistency Evaluation | SummEval | Spearman Correlation | 26.3 | 36
Factual Consistency Evaluation | FRANK-XSum (FRK-X) | Spearman Correlation | 19.1 | 30
Factual Consistency Evaluation | FRANK CNNDM | Spearman Correlation | 40.5 | 30
Factual Consistency Evaluation | SamSum | Spearman Correlation | 3.9 | 30
Factual Consistency Evaluation | XSumFaith (test) | Pearson Correlation Coefficient | 41.9 | 22
Factual Consistency Evaluation | XSum-Faithful (XSF) | Spearman Correlation | 42.1 | 22

(Showing 10 of 34 rows.)
