
Beyond Literal Mapping: Benchmarking and Improving Non-Literal Translation Evaluation

About

Large Language Models (LLMs) have significantly advanced Machine Translation (MT), extending it to linguistically complex domains such as social network services and literature. In these scenarios, translations often require handling non-literal expressions, which undermines the accuracy of MT metrics. To systematically investigate the reliability of MT metrics, we first curate MENT, a meta-evaluation dataset focused on non-literal translation. MENT covers four non-literal translation domains and pairs source sentences with translations from diverse MT systems, accompanied by 7,530 human-annotated scores on translation quality. Experimental results reveal the inaccuracy of traditional MT metrics and the limitations of LLM-as-a-Judge, particularly the knowledge-cutoff and score-inconsistency problems. To mitigate these limitations, we propose RATE, a novel agentic translation evaluation framework centered on a reflective Core Agent that dynamically invokes specialized sub-agents. Experimental results indicate the efficacy of RATE, which improves the meta score by at least 3.2 points over current metrics. Further experiments demonstrate the robustness of RATE on general-domain MT evaluation. Code and dataset are available at: https://github.com/BITHLP/RATE.
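The "reflective Core Agent that dynamically invokes specialized sub-agents" can be pictured as a simple orchestration loop. The sketch below is purely illustrative: the agent names, the 0-100 score scale, and the reflection rule are assumptions for exposition, not the paper's actual design or API.

```python
# Hypothetical sketch of an agentic evaluation loop in the spirit of RATE.
# All names (CoreAgent, SUB_AGENTS, the score scale) are illustrative
# assumptions, not taken from the paper's implementation.

from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Judgement:
    score: float      # quality score on an assumed 0-100 scale
    rationale: str    # sub-agent's explanation of its score

# Registry of specialized sub-agents: name -> evaluation function.
# Real sub-agents would query an LLM or external tools; these are stubs.
SUB_AGENTS: dict[str, Callable[[str, str], Judgement]] = {
    "literal_check": lambda src, hyp: Judgement(75.0, "adequate literal overlap"),
    "idiom_check": lambda src, hyp: Judgement(60.0, "idiom rendered too literally"),
}

@dataclass
class CoreAgent:
    """Reflective core agent: collects sub-agent judgements, reflects on
    their disagreement, and aggregates them into one final score."""
    tolerance: float = 10.0
    history: list[Judgement] = field(default_factory=list)

    def evaluate(self, source: str, hypothesis: str) -> float:
        for name, agent in SUB_AGENTS.items():
            self.history.append(agent(source, hypothesis))
        scores = [j.score for j in self.history]
        # Reflection step (simplified): if sub-agents disagree strongly,
        # lean toward the more critical judgement instead of averaging.
        if max(scores) - min(scores) > self.tolerance:
            return min(scores) + self.tolerance / 2
        return sum(scores) / len(scores)

agent = CoreAgent()
final = agent.evaluate("source sentence with an idiom", "a too-literal translation")
```

In this toy run the two sub-agents disagree by 15 points, so the reflection rule fires and pulls the final score toward the stricter judgement. The actual framework's reflection and invocation strategy is described in the paper.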

Yanzhi Tian, Cunxiang Wang, Zeming Liu, Heyan Huang, Wenbo Yu, Dawei Song, Jie Tang, Yuhang Guo • 2026

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Machine Translation Meta-evaluation | MENT ZH-EN | Meta Score: 80.4 | 30 |
| Machine Translation Meta-evaluation | MENT EN-ZH | Meta Score: 80.4 | 30 |
| Machine Translation Meta-evaluation | WMT En-De Metrics Shared Task (System-Level) 2023 (test) | Accuracy: 98.5 | 6 |
| Machine Translation Meta-evaluation | WMT En-De Metrics Shared Task (Segment-Level) 2023 (test) | Accuracy (Test): 52.3 | 6 |
