On Compositional Generalization of Neural Machine Translation
About
Modern neural machine translation (NMT) models have achieved competitive performance on standard benchmarks such as WMT. However, significant open issues remain, such as robustness and domain generalization. In this paper, we study NMT models from the perspective of compositional generalization by building a benchmark dataset, CoGnition, consisting of 216k clean and consistent sentence pairs. We quantitatively analyze the effects of various factors using compound translation error rate, and demonstrate that the NMT model fails badly on compositional generalization, although it performs remarkably well under traditional metrics.
Yafu Li, Yongjing Yin, Yulong Chen, Yue Zhang • 2021
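As a rough illustration of the evaluation idea, the sketch below computes an instance-level compound translation error rate: a compound counts as an error when none of its acceptable target-side renderings appears in the model output. This is a minimal sketch under assumptions; the function name `compound_error_rate`, the substring-matching criterion, and the toy data are illustrative, not the paper's exact implementation.

```python
from typing import Iterable, List


def compound_error_rate(
    outputs: List[str],
    compound_refs: List[Iterable[str]],
) -> float:
    """Instance-level compound translation error rate (sketch).

    An instance counts as correct if any acceptable rendering of
    its compound appears as a substring of the model output;
    otherwise it is an error. Returns the fraction of errors.
    """
    assert len(outputs) == len(compound_refs)
    errors = sum(
        1
        for out, refs in zip(outputs, compound_refs)
        if not any(ref in out for ref in refs)
    )
    return errors / len(outputs)


if __name__ == "__main__":
    # Hypothetical toy example: two outputs, each paired with
    # acceptable renderings of its novel compound.
    outs = ["er sah den roten Hund im Park", "sie mag die Katze"]
    refs = [["roten Hund"], ["den roten Hund"]]
    print(f"instance-level error rate: {compound_error_rate(outs, refs):.1%}")
```

In practice, matching a compound's translation may require more than substring lookup (e.g. handling inflection or word order), but the error-rate aggregation itself stays the same.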
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Semantic Parsing | COGS (generalization) | Accuracy (Generalization) | 85.5 | 25 |
| Machine Translation | CoGnition compositional generalization (test) | Inst. Error Rate | 29.4 | 15 |