
BARTScore: Evaluating Generated Text as Text Generation

About

A wide variety of NLP applications, such as machine translation, summarization, and dialog, involve text generation. One major challenge for these applications is how to evaluate whether such generated texts are actually fluent, accurate, or effective. In this work, we conceptualize the evaluation of generated text as a text generation problem, modeled using pre-trained sequence-to-sequence models. The general idea is that models trained to convert the generated text to/from a reference output or the source text will achieve higher scores when the generated text is better. We operationalize this idea using BART, an encoder-decoder based pre-trained model, and propose a metric BARTScore with a number of variants that can be flexibly applied in an unsupervised fashion to evaluation of text from different perspectives (e.g. informativeness, fluency, or factuality). BARTScore is conceptually simple and empirically effective. It can outperform existing top-scoring metrics in 16 of 22 test settings, covering evaluation of 16 datasets (e.g., machine translation, text summarization) and 7 different perspectives (e.g., informativeness, factuality). Code to calculate BARTScore is available at https://github.com/neulab/BARTScore, and we have released an interactive leaderboard for meta-evaluation at http://explainaboard.nlpedia.ai/leaderboard/task-meval/ on the ExplainaBoard platform, which allows us to interactively understand the strengths, weaknesses, and complementarity of each metric.
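The core scoring rule — the average log-probability the seq2seq model assigns to the target text's tokens given the source — can be sketched in a few lines. Below is a minimal NumPy illustration of that computation applied to raw decoder logits; in practice the logits come from a pre-trained BART model (e.g. via a library such as HuggingFace transformers), and the released code adds variants (prompting, token weighting, different generation directions) not shown here. The `pad_id` default is an assumption for the example.

```python
import numpy as np

def bartscore_from_logits(logits, target_ids, pad_id=1):
    """Average log-likelihood of the target tokens under the model's
    per-step next-token distributions -- the core of a BARTScore-style
    metric. logits: (batch, seq_len, vocab); target_ids: (batch, seq_len).
    pad_id is a placeholder assumption for this sketch."""
    # Numerically stable log-softmax over the vocabulary axis.
    z = logits - logits.max(axis=-1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=-1, keepdims=True))
    # Pick out the log-probability assigned to each reference token.
    token_ll = np.take_along_axis(
        log_probs, target_ids[..., None], axis=-1
    ).squeeze(-1)
    # Mask padding and average over the real tokens.
    mask = (target_ids != pad_id)
    return (token_ll * mask).sum(axis=-1) / mask.sum(axis=-1)

# Toy check: logits peaked on the reference tokens score higher
# (closer to 0) than uninformative uniform logits.
target = np.array([[2, 3, 4]])
uniform = np.zeros((1, 3, 5))
peaked = np.zeros((1, 3, 5))
for t, tok in enumerate([2, 3, 4]):
    peaked[0, t, tok] = 10.0
print(bartscore_from_logits(peaked, target))   # near 0
print(bartscore_from_logits(uniform, target))  # about -log(5) = -1.609
```

A better generated text yields reference tokens the model finds more predictable, so its average token log-likelihood is higher — that is the intuition the abstract describes.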

Weizhe Yuan, Graham Neubig, Pengfei Liu • 2021

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Image Captioning Evaluation | Composite | Kendall Tau-c (tau_c) | 43.5 | 92 |
| Image Captioning Evaluation | Flickr8k Expert | Kendall Tau-c (tau_c) | 37.8 | 73 |
| Image Captioning Evaluation | Flickr8K-CF | Kendall Tau-b (tau_b) | 24.3 | 62 |
| Factual Consistency Evaluation | SummaC | CGS | 74.3 | 52 |
| Summarization Evaluation | SummEval | Coherence | 55 | 41 |
| Summarization Evaluation | SummEval | Avg Spearman Rho | 0.385 | 40 |
| Factual Consistency Evaluation | QAGS XSUM | Spearman Correlation | 17 | 39 |
| Factual Consistency Evaluation | QAGS CNNDM | Spearman Correlation | 68.1 | 38 |
| Factual Consistency Evaluation | TRUE benchmark | PAWS (AUC-ROC) | 77.1 | 37 |
| Factual Consistency Evaluation | SummEval | Spearman Correlation | 39.1 | 36 |

Showing 10 of 83 rows.

Other info

Code: https://github.com/neulab/BARTScore
