M-MAD: Multidimensional Multi-Agent Debate for Advanced Machine Translation Evaluation
About
Recent advancements in large language models (LLMs) have given rise to the LLM-as-a-judge paradigm, showcasing their potential to deliver human-like judgments. However, in the field of machine translation (MT) evaluation, current LLM-as-a-judge methods fall short of learned automatic metrics. In this paper, we propose Multidimensional Multi-Agent Debate (M-MAD), a systematic LLM-based multi-agent framework for advanced LLM-as-a-judge MT evaluation. Our findings demonstrate that M-MAD achieves significant advancements by (1) decoupling heuristic MQM criteria into distinct evaluation dimensions for fine-grained assessments; (2) employing multi-agent debates to harness the collaborative reasoning capabilities of LLMs; (3) synthesizing dimension-specific results into a final evaluation judgment to ensure robust and reliable outcomes. Comprehensive experiments show that M-MAD not only outperforms all existing LLM-as-a-judge methods but also competes with state-of-the-art reference-based automatic metrics, even when powered by a suboptimal model like GPT-4o mini. Detailed ablations and analysis highlight the superiority of our framework design, offering a fresh perspective on the LLM-as-a-judge paradigm. Our code and data are publicly available at https://github.com/SU-JIAYUAN/M-MAD.
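The three stages above (dimension decoupling, per-dimension debate, final synthesis) can be sketched as follows. This is a minimal illustrative sketch, not the repository's actual implementation: the dimension list, agent roles, round count, and the `call_llm` stub are all assumptions standing in for the framework's real prompts and API calls.

```python
# Illustrative sketch of the M-MAD three-stage pipeline.
# `call_llm` is a placeholder for a real LLM API call (e.g. GPT-4o mini);
# dimension names and agent roles are assumed, not the paper's exact setup.

DIMENSIONS = ["accuracy", "fluency", "style", "terminology"]  # assumed MQM-derived dimensions


def call_llm(prompt: str) -> str:
    """Stub standing in for an LLM completion request."""
    return f"[LLM response to: {prompt[:40]}...]"


def debate(dimension: str, source: str, translation: str, rounds: int = 2) -> str:
    """Stage 2: two agents debate translation quality on a single dimension,
    then a per-dimension judge condenses the transcript into a verdict."""
    transcript = []
    for r in range(rounds):
        for role in ("affirmative", "negative"):
            prompt = (f"Round {r + 1}, {role} agent: assess the {dimension} of "
                      f"translating '{source}' as '{translation}'. "
                      f"Debate so far: {transcript}")
            transcript.append((role, call_llm(prompt)))
    return call_llm(f"Judge the {dimension} debate and give a verdict: {transcript}")


def m_mad(source: str, translation: str) -> dict:
    """Stage 1 decouples evaluation into dimensions; stage 3 synthesizes
    the per-dimension verdicts into one final MQM-style judgment."""
    dim_verdicts = {d: debate(d, source, translation) for d in DIMENSIONS}
    final = call_llm(f"Synthesize a final judgment from: {dim_verdicts}")
    return {"dimensions": dim_verdicts, "final_judgment": final}
```

With a real LLM backend plugged into `call_llm`, each dimension's debate runs independently, so the per-dimension calls can also be issued in parallel.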
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Machine Translation Meta-evaluation | MENT ZH-EN | Meta Score: 68.7 | 30 |
| Machine Translation Meta-evaluation | MENT EN-ZH | Meta Score: 68.7 | 30 |
| Machine Translation Evaluation Metric | WMT MQM 23 | Acc: 94.5 | 27 |
| Next-Utterance Generation | ORCHID | Subjective Quality: 83.7 | 12 |
| Machine Translation Meta-evaluation | WMT En-De Metrics Shared Task (Segment-Level) 2023 (test) | Accuracy (Test): 55.5 | 6 |
| Machine Translation Meta-evaluation | WMT En-De Metrics Shared Task (System-Level) 2023 (test) | Accuracy: 97 | 6 |
| Debate Generation | Experiment 1 Input Set | Choose Rate: 15.79 | 4 |
| Multi-turn Competitive Debate Simulation | ORCHID | S Score: 0.87 | 4 |