BERTScore: Evaluating Text Generation with BERT
About
We propose BERTScore, an automatic evaluation metric for text generation. Analogously to common metrics, BERTScore computes a similarity score for each token in the candidate sentence with each token in the reference sentence. However, instead of exact matches, we compute token similarity using contextual embeddings. We evaluate using the outputs of 363 machine translation and image captioning systems. BERTScore correlates better with human judgments and provides stronger model selection performance than existing metrics. Finally, we use an adversarial paraphrase detection task to show that BERTScore is more robust to challenging examples when compared to existing metrics.
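The core computation is easy to sketch. Below is a minimal, illustrative Python version, assuming the HuggingFace `transformers` library and a generic `bert-base-uncased` checkpoint; it omits the paper's idf importance weighting, baseline rescaling, and tuned layer selection, and it does not strip special tokens, so treat it as a sketch of the greedy-matching idea rather than the reference implementation.

```python
# Minimal BERTScore sketch: greedy token matching over contextual embeddings.
# Assumptions: HuggingFace transformers, bert-base-uncased, last hidden layer.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def embed(sentence: str) -> torch.Tensor:
    """Return L2-normalized contextual embeddings, one row per token."""
    inputs = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).last_hidden_state[0]  # (num_tokens, dim)
    return hidden / hidden.norm(dim=-1, keepdim=True)

def bert_score(candidate: str, reference: str):
    cand, ref = embed(candidate), embed(reference)
    sim = cand @ ref.T  # pairwise cosine similarities (cand_len x ref_len)
    # Precision: each candidate token greedily matched to its best reference token.
    precision = sim.max(dim=1).values.mean()
    # Recall: each reference token greedily matched to its best candidate token.
    recall = sim.max(dim=0).values.mean()
    f1 = 2 * precision * recall / (precision + recall)
    return precision.item(), recall.item(), f1.item()

print(bert_score("the cat sat on the mat", "a cat was sitting on the mat"))
```

The authors' released package (`pip install bert-score`) wraps the full method, including model and layer defaults per language: `from bert_score import score; P, R, F1 = score(candidates, references, lang="en")`.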
Tianyi Zhang, Varsha Kishore, Felix Wu, Kilian Q. Weinberger, Yoav Artzi · 2019
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Image Captioning Evaluation | Composite | Kendall Tau-c (τ_c) | 30.1 | 92 |
| Image Captioning Evaluation | Flickr8K Expert (test) | Kendall Tau-c (τ_c) | 39.2 | 76 |
| Image Captioning Evaluation | Flickr8K Expert | Kendall Tau-c (τ_c) | 46.7 | 73 |
| Image Captioning Evaluation | Pascal-50S (test) | HC | 65.4 | 66 |
| Image Captioning Evaluation | Flickr8K-CF (test) | Kendall Tau-b (τ_b) | 22.8 | 65 |
| Image Captioning Evaluation | Flickr8K-CF | Kendall Tau-b (τ_b) | 22.8 | 62 |
| Factual Consistency Evaluation | SummaC | CGS | 63.1 | 52 |
| Metrics correlation with human judgment | WebNLG challenge 2017 | Spearman Correlation (ρ) | 0.81 | 45 |
| Summarization Evaluation | SummEval | Coherence | 33.3 | 41 |
| Summarization Evaluation | SummEval | Avg Spearman Rho (ρ) | 0.225 | 40 |
Showing 10 of 176 benchmark results.