QAFactEval: Improved QA-Based Factual Consistency Evaluation for Summarization
About
Factual consistency is an essential quality of text summarization models in practical settings. Existing work in evaluating this dimension can be broadly categorized into two lines of research, entailment-based and question answering (QA)-based metrics, and different experimental setups often lead to contrasting conclusions as to which paradigm performs the best. In this work, we conduct an extensive comparison of entailment and QA-based metrics, demonstrating that carefully choosing the components of a QA-based metric, especially question generation and answerability classification, is critical to performance. Building on those insights, we propose an optimized metric, which we call QAFactEval, that leads to a 14% average improvement over previous QA-based metrics on the SummaC factual consistency benchmark, and also outperforms the best-performing entailment-based metric. Moreover, we find that QA-based and entailment-based metrics can offer complementary signals and be combined into a single metric for a further performance boost.
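To make the QA-based paradigm concrete, here is a minimal, self-contained sketch of the general pipeline such metrics follow: select answer candidates from the summary, pose questions about them, answer those questions against the source document (with an answerability check), and compare the answers. All component implementations below are toy stand-ins for illustration only; QAFactEval uses learned models for each step (question generation, QA, answerability classification, and a learned answer-overlap scorer), not these heuristics.

```python
def token_f1(pred: str, gold: str) -> float:
    """Token-overlap F1, a common answer-comparison metric in QA-based evaluation."""
    pred_toks, gold_toks = pred.lower().split(), gold.lower().split()
    common = set(pred_toks) & set(gold_toks)
    if not common:
        return 0.0
    p = len(common) / len(pred_toks)
    r = len(common) / len(gold_toks)
    return 2 * p * r / (p + r)

def extract_answer_candidates(summary: str) -> list[str]:
    # Toy stand-in for NP/NER answer selection: take capitalized or
    # numeric tokens as candidate "answers" from the summary.
    return [t.strip(".,") for t in summary.split()
            if t[:1].isupper() or t[:1].isdigit()]

def answer_from_source(candidate: str, source: str) -> str:
    # Toy stand-in for question generation + QA + answerability
    # classification: "answer" the implied question with the candidate
    # if the source supports it, else return "" (unanswerable).
    return candidate if candidate.lower() in source.lower() else ""

def qa_consistency_score(summary: str, source: str) -> float:
    """Average answer-overlap score over all answer candidates."""
    cands = extract_answer_candidates(summary)
    if not cands:
        return 0.0
    scores = []
    for cand in cands:
        ans = answer_from_source(cand, source)
        # Unanswerable questions (unsupported facts) contribute 0.
        scores.append(0.0 if ans == "" else token_f1(ans, cand))
    return sum(scores) / len(scores)
```

A hallucinated entity in the summary yields an "unanswerable" question against the source and drags the score down, which is the core signal the paper shows hinges on strong question generation and answerability components.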
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Factual Consistency Evaluation | SummaC | CGS | 83.4 | 52 |
| Factual Consistency Evaluation | QAGS XSUM | Spearman Correlation | 44.1 | 39 |
| Factual Consistency Evaluation | QAGS CNNDM | Spearman Correlation | 63.1 | 38 |
| Factual Consistency Evaluation | TRUE benchmark | PAWS (AUC-ROC) | 86.1 | 37 |
| Factual Consistency Evaluation | SummEval | Spearman Correlation | 42.8 | 36 |
| Opinion Summarization Metric Evaluation | OPINSUMMEVAL | Aspect Relevance | 45 | 32 |
| Factual Consistency Evaluation | SamSum | Spearman Correlation | 35.9 | 30 |
| Factual Consistency Evaluation | FRANK-XSum (FRK-X) | Spearman Correlation | 25.5 | 30 |
| Factual Consistency Evaluation | FRANK CNNDM | Spearman Correlation | 53.7 | 30 |
| Factual Consistency Evaluation | SUMMEVAL (test) | Pearson Correlation | 61.6 | 22 |