
BLEURT: Learning Robust Metrics for Text Generation

About

Text generation has made significant advances in the last few years. Yet, evaluation metrics have lagged behind, as the most popular choices (e.g., BLEU and ROUGE) may correlate poorly with human judgments. We propose BLEURT, a learned evaluation metric based on BERT that can model human judgments with a few thousand possibly biased training examples. A key aspect of our approach is a novel pre-training scheme that uses millions of synthetic examples to help the model generalize. BLEURT provides state-of-the-art results on the last three years of the WMT Metrics shared task and the WebNLG Competition dataset. In contrast to a vanilla BERT-based approach, it yields superior results even when the training data is scarce and out-of-distribution.
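To make the pre-training idea concrete, here is a minimal illustrative sketch (not the authors' code): BLEURT's pre-training phase scores millions of synthetic sentence pairs with cheap automatic signals before fine-tuning on human ratings. The sketch below generates perturbed "candidate" sentences by random word dropping and labels each pair with a simple token-overlap F1, a stand-in for the BLEU/ROUGE-style pre-training signals described in the paper; the function names and the perturbation scheme are assumptions for illustration only.

```python
import random

def perturb(sentence, drop_prob=0.3, rng=None):
    """Create a synthetic 'candidate' by randomly dropping words.
    (Illustrative perturbation; the paper uses mask-filling and
    backtranslation, among others.)"""
    rng = rng or random.Random(0)
    words = sentence.split()
    kept = [w for w in words if rng.random() > drop_prob]
    return " ".join(kept) if kept else words[0]

def overlap_f1(reference, candidate):
    """Token-overlap F1: a cheap automatic pseudo-label, standing in
    for the lexical pre-training signals (BLEU, ROUGE, etc.)."""
    ref, cand = set(reference.split()), set(candidate.split())
    common = len(ref & cand)
    if common == 0:
        return 0.0
    p, r = common / len(cand), common / len(ref)
    return 2 * p * r / (p + r)

if __name__ == "__main__":
    reference = "the quick brown fox jumps over the lazy dog"
    rng = random.Random(42)
    # Each (reference, candidate, pseudo-label) triple would feed the
    # BERT-based regressor during pre-training.
    for _ in range(3):
        cand = perturb(reference, rng=rng)
        print(f"{overlap_f1(reference, cand):.2f}  {cand}")
```

In the actual method, a BERT model is trained to predict several such signals jointly on millions of synthetic pairs, which is what lets the final metric generalize from only a few thousand human-rated examples.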

Thibault Sellam, Dipanjan Das, Ankur P. Parikh • 2020

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Factual Consistency Evaluation | SummaC | CGS | 60.8 | 52 |
| Summarization Evaluation | SummEval | Coherence | 53.3 | 41 |
| Factual Consistency Evaluation | QAGS XSUM | Spearman Correlation | 12.4 | 39 |
| Factual Consistency Evaluation | QAGS CNNDM | Spearman Correlation | 43.4 | 38 |
| Factual Consistency Evaluation | TRUE benchmark | PAWS (AUC-ROC) | 68.4 | 37 |
| Factual Consistency Evaluation | SummEval | Spearman Correlation | 23.6 | 36 |
| Machine Translation Meta-evaluation | WMT Metrics Shared Task Segment-level 2023 (Primary submissions) | Avg Correlation | 0.622 | 33 |
| Factual Consistency Evaluation | FRANK-XSum (FRK-X) | Spearman Correlation | 13.9 | 30 |
| Machine Translation Meta-evaluation | MENT ZH-EN | Meta Score | 56.5 | 30 |
| Machine Translation Meta-evaluation | MENT EN-ZH | Meta Score | 56.5 | 30 |

Showing 10 of 54 rows
