The FLORES-101 Evaluation Benchmark for Low-Resource and Multilingual Machine Translation
About
One of the biggest challenges hindering progress in low-resource and multilingual machine translation is the lack of good evaluation benchmarks. Current evaluation benchmarks either lack good coverage of low-resource languages, consider only restricted domains, or are low quality because they are constructed using semi-automatic procedures. In this work, we introduce the FLORES-101 evaluation benchmark, consisting of 3001 sentences extracted from English Wikipedia and covering a variety of topics and domains. These sentences have been translated into 101 languages by professional translators through a carefully controlled process. The resulting dataset enables better assessment of model quality on the long tail of low-resource languages, including the evaluation of many-to-many multilingual translation systems, as all translations are multilingually aligned. By publicly releasing such a high-quality and high-coverage dataset, we hope to foster progress in the machine translation community and beyond.
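Because all 3001 sentences are multilingually aligned, a parallel test set for any of the 101 × 100 language directions can be assembled by pairing files line by line. Below is a minimal sketch; the directory layout and file names (`flores101_dataset/devtest/<lang>.devtest`, one sentence per line) are assumptions based on the official release archive and should be adjusted to your local copy.

```python
# Sketch: building a parallel English-Swahili test set from FLORES-101.
# The path layout ("devtest/<lang>.devtest") is an assumption based on
# the official release archive. Line i is the same sentence in every
# language, so zipping two files yields aligned (source, reference)
# pairs for that language direction.
def read_sentences(path: str) -> list[str]:
    with open(path, encoding="utf-8") as f:
        return [line.strip() for line in f]

eng = read_sentences("flores101_dataset/devtest/eng.devtest")
swh = read_sentences("flores101_dataset/devtest/swh.devtest")
assert len(eng) == len(swh)  # every language covers the same sentences

pairs = list(zip(eng, swh))  # aligned by index across all 101 languages
```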
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Machine Translation | FLORES-101 (devtest) | -- | -- | 30 |
| Machine Translation | SIGMORPHON 2023 (test) | BLEU (Git) | 0.009 | 8 |
| Machine Translation | GlossLM (test) | BLEU (Swahili) | 6.9 | 6 |
| Machine Translation | GlossLM (mid/high-resource languages) | BLEU (Urdu) | 0.2 | 6 |
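To produce BLEU numbers comparable to those in the table above, a system's devtest outputs can be scored with sacreBLEU. The following is a minimal sketch: `translate` is a hypothetical placeholder for a real MT system, and the file paths again assume the official release layout.

```python
# Sketch: scoring a system on the FLORES-101 devtest split with
# sacreBLEU. `translate` is a hypothetical stand-in for a real MT
# system; paths assume the official release layout.
import sacrebleu

def translate(sentence: str) -> str:
    # Placeholder: replace with an actual model's inference call.
    return sentence

with open("flores101_dataset/devtest/eng.devtest", encoding="utf-8") as f:
    sources = [line.strip() for line in f]
with open("flores101_dataset/devtest/swh.devtest", encoding="utf-8") as f:
    references = [line.strip() for line in f]

hypotheses = [translate(s) for s in sources]
# corpus_bleu expects a list of hypothesis strings and a list of
# reference sets (here, a single reference per sentence).
bleu = sacrebleu.corpus_bleu(hypotheses, [references])
print(f"BLEU: {bleu.score:.2f}")
```

Note that for fair comparison across languages with different scripts, the FLORES-101 authors recommend spBLEU, i.e. BLEU computed over SentencePiece-tokenized text rather than the default tokenization.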