
xCOMET: Transparent Machine Translation Evaluation through Fine-grained Error Detection

About

Widely used learned metrics for machine translation evaluation, such as COMET and BLEURT, estimate the quality of a translation hypothesis by providing a single sentence-level score. As such, they offer little insight into translation errors (e.g., what are the errors and what is their severity). On the other hand, generative large language models (LLMs) are amplifying the adoption of more granular strategies to evaluation, attempting to detail and categorize translation errors. In this work, we introduce xCOMET, an open-source learned metric designed to bridge the gap between these approaches. xCOMET integrates both sentence-level evaluation and error span detection capabilities, exhibiting state-of-the-art performance across all types of evaluation (sentence-level, system-level, and error span detection). Moreover, it does so while highlighting and categorizing error spans, thus enriching the quality assessment. We also provide a robustness analysis with stress tests, and show that xCOMET is largely capable of identifying localized critical errors and hallucinations.

Nuno M. Guerreiro, Ricardo Rei, Daan van Stigt, Luisa Coheur, Pierre Colombo, André F. T. Martins • 2023
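
xCOMET checkpoints are released as open-source models usable through the unbabel-comet Python package (e.g., the Unbabel/XCOMET-XL and Unbabel/XCOMET-XXL checkpoints on Hugging Face). The sketch below is a minimal usage example under that assumption; the source/translation strings are illustrative only. It shows how a hypothesis is scored at the sentence level while the detected error spans are returned alongside the score.

```python
# Minimal usage sketch, assuming the unbabel-comet package and the
# Unbabel/XCOMET-XL checkpoint from Hugging Face (illustrative sample text).
from comet import download_model, load_from_checkpoint

# Download and load an xCOMET checkpoint.
model_path = download_model("Unbabel/XCOMET-XL")
model = load_from_checkpoint(model_path)

# Each sample pairs a source, a machine translation, and (optionally) a reference.
data = [
    {
        "src": "The meeting was postponed until next week.",
        "mt": "Das Treffen wurde auf letzte Woche verschoben.",
        "ref": "Die Besprechung wurde auf nächste Woche verschoben.",
    }
]

# Set gpus=0 to run on CPU.
output = model.predict(data, batch_size=8, gpus=1)

print(output.scores)                # sentence-level quality scores
print(output.system_score)          # corpus-level (system) score
print(output.metadata.error_spans)  # detected error spans with severities
```

Because the metric exposes the error spans behind each score, the same prediction call supports both sentence-/system-level evaluation and error span detection reported in the benchmarks below.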

Related benchmarks

Task | Dataset | Result | Rank
Machine Translation Meta-evaluation | WMT Metrics Shared Task Segment-level 2023 (Primary submissions) | Avg Correlation: 0.697 | 33
Machine Translation Meta-evaluation | MENT ZH-EN | Meta Score: 54.5 | 30
Machine Translation Meta-evaluation | MENT EN-ZH | Meta Score: 54.5 | 30
Machine Translation Meta-evaluation | WMT MQM (En-De, En-Es, Ja-Zh) 24 | SPA: 86.1 | 28
Machine Translation Evaluation Metric | WMT MQM 23 | Acc: 92.8 | 27
Machine Translation Evaluation | MSLC OOD 24 | MT Empty Score: 73.79 | 12
Quality Estimation | En-Ml | Pearson r: 0.355 | 9
Error Span Detection | WMT24 (test) | SPA: 84.4 | 6
