
GEMBA-MQM: Detecting Translation Quality Error Spans with GPT-4

About

This paper introduces GEMBA-MQM, a GPT-based evaluation metric designed to detect translation quality errors, specifically in the quality estimation setting, where no human reference translations are available. Leveraging the power of large language models (LLMs), GEMBA-MQM employs a fixed three-shot prompting technique, querying the GPT-4 model to mark quality error spans. Compared to previous works, our method uses language-agnostic prompts, avoiding the need for manual prompt preparation for new languages. While preliminary results indicate that GEMBA-MQM achieves state-of-the-art accuracy for system ranking, we advise caution when using it in academic works to demonstrate improvements over other methods, due to its dependence on the proprietary, black-box GPT model.
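The abstract describes the metric's pipeline: a fixed few-shot MQM prompt is sent to GPT-4, the model annotates error spans with a severity and category, and the annotations are aggregated into a quality score. A minimal sketch of the parsing and scoring stages is shown below; the response format, category names, and severity weights are illustrative assumptions, not the paper's exact templates, and the API call itself is omitted.

```python
import re

# Assumed severity weights for aggregating errors into an MQM-style score;
# the actual weighting used by GEMBA-MQM may differ.
SEVERITY_WEIGHTS = {"critical": 25, "major": 5, "minor": 1}

def parse_errors(response: str) -> list[dict]:
    """Parse model output lines of the (assumed) form:
       severity: category - "error span"
    e.g.  major: accuracy/mistranslation - "colored band"
    """
    errors = []
    for line in response.splitlines():
        m = re.match(r'\s*(critical|major|minor):\s*([\w/-]+)\s*-\s*"(.*)"', line)
        if m:
            errors.append(
                {"severity": m.group(1), "category": m.group(2), "span": m.group(3)}
            )
    return errors

def mqm_score(errors: list[dict]) -> float:
    """Aggregate parsed errors into a (negative) penalty score."""
    return -sum(SEVERITY_WEIGHTS[e["severity"]] for e in errors)

# Usage with a hypothetical GPT-4 response:
response = (
    'major: accuracy/mistranslation - "wrong term"\n'
    'minor: fluency/grammar - "teh"'
)
errors = parse_errors(response)
score = mqm_score(errors)  # -(5 + 1) = -6
```

Because the prompt is language-agnostic, the same parsing and scoring code applies to any language pair; only the source and translation segments in the prompt change.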

Tom Kocmi, Christian Federmann • 2023

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Machine Translation Meta-evaluation | WMT Metrics Shared Task Segment-level 2023 (Primary submissions) | Avg Correlation: 0.639 | 33 |
| Machine Translation Evaluation Metric | WMT MQM 23 | Acc: 94.5 | 27 |
| Machine Translation Evaluation | WMT MQM 2022 (test) | Accuracy (System, 3 LPs): 84.7 | 16 |
