
M-MAD: Multidimensional Multi-Agent Debate for Advanced Machine Translation Evaluation

About

Recent advancements in large language models (LLMs) have given rise to the LLM-as-a-judge paradigm, showcasing their potential to deliver human-like judgments. However, in the field of machine translation (MT) evaluation, current LLM-as-a-judge methods fall short of learned automatic metrics. In this paper, we propose Multidimensional Multi-Agent Debate (M-MAD), a systematic LLM-based multi-agent framework for advanced LLM-as-a-judge MT evaluation. Our findings demonstrate that M-MAD achieves significant advancements by (1) decoupling heuristic MQM criteria into distinct evaluation dimensions for fine-grained assessments; (2) employing multi-agent debates to harness the collaborative reasoning capabilities of LLMs; (3) synthesizing dimension-specific results into a final evaluation judgment to ensure robust and reliable outcomes. Comprehensive experiments show that M-MAD not only outperforms all existing LLM-as-a-judge methods but also competes with state-of-the-art reference-based automatic metrics, even when powered by a suboptimal model like GPT-4o mini. Detailed ablations and analyses highlight the superiority of our framework design, offering a fresh perspective on the LLM-as-a-judge paradigm. Our code and data are publicly available at https://github.com/SU-JIAYUAN/M-MAD.
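The three stages above map naturally onto a short pipeline. The following is a minimal sketch, assuming only a generic chat-completion callable (`chat`); the dimension list, prompts, and function names are illustrative assumptions for exposition, not the authors' released implementation (see the linked repository for that).

```python
from typing import Callable

# Stage 1: decouple the heuristic MQM criteria into distinct dimensions.
# NOTE: this dimension set is an assumption drawn from common MQM
# categories, not necessarily the exact set used in the paper.
DIMENSIONS = ["accuracy", "fluency", "style", "terminology", "locale"]


def debate_dimension(chat: Callable[[str], str], src: str, hyp: str,
                     dim: str, rounds: int = 2) -> str:
    """Stage 2: run a two-agent debate on one dimension, then have a
    judge agent distill the transcript into a dimension-level verdict."""
    context = (f"Source: {src}\nTranslation: {hyp}\n"
               f"Debate the translation quality on the '{dim}' dimension.")
    transcript: list[tuple[str, str]] = []
    for _ in range(rounds):
        for role in ("affirmative", "negative"):
            turn = chat(f"{context}\nDebate so far: {transcript}\n"
                        f"You are the {role} debater. Give your next argument.")
            transcript.append((role, turn))
    return chat(f"{context}\nDebate transcript: {transcript}\n"
                "As the judge, report error spans and severities for "
                f"the '{dim}' dimension.")


def m_mad_evaluate(chat: Callable[[str], str], src: str, hyp: str) -> str:
    """Stage 3: synthesize the dimension-specific verdicts into one
    final evaluation judgment."""
    verdicts = {d: debate_dimension(chat, src, hyp, d) for d in DIMENSIONS}
    return chat("Combine these dimension-level judgments into a single "
                f"final MQM-style assessment: {verdicts}")
```

Any function mapping a prompt string to a completion string can stand in for `chat`, e.g. a thin wrapper around an OpenAI-compatible API, or a trivial stub for dry-running the control flow.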

Zhaopeng Feng, Jiayuan Su, Jiamei Zheng, Jiahan Ren, Yan Zhang, Jian Wu, Hongwei Wang, Zuozhu Liu • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|------|---------|--------|--------|------|
| Machine Translation Meta-evaluation | MENT ZH-EN | Meta Score | 68.7 | 30 |
| Machine Translation Meta-evaluation | MENT EN-ZH | Meta Score | 68.7 | 30 |
| Machine Translation Evaluation Metric | WMT MQM 23 | Acc | 94.5 | 27 |
| Next-Utterance Generation | ORCHID | Subjective Quality | 83.7 | 12 |
| Machine Translation Meta-evaluation | WMT En-De Metrics Shared Task (Segment-Level) 2023 (test) | Accuracy (Test) | 55.5 | 6 |
| Machine Translation Meta-evaluation | WMT En-De Metrics Shared Task (System-Level) 2023 (test) | Accuracy | 97 | 6 |
| Debate Generation | Experiment 1 Input Set | Choose Rate | 15.79 | 4 |
| Multi-turn Competitive Debate Simulation | ORCHID | S Score | 0.87 | 4 |
