Balancing Training for Multilingual Neural Machine Translation

About

When training multilingual machine translation (MT) models that can translate to and from multiple languages, we are faced with imbalanced training sets: some languages have far more training data than others. Standard practice is to up-sample less-resourced languages to increase their representation, and the degree of up-sampling has a large effect on overall performance. In this paper, we propose a method that instead automatically learns how to weight training data through a data scorer optimized to maximize performance on all test languages. Experiments on two sets of languages, under both one-to-many and many-to-one MT settings, show that our method not only consistently outperforms heuristic baselines in average performance but also offers flexible control over which languages' performance is prioritized.

Xinyi Wang, Yulia Tsvetkov, Graham Neubig • 2020
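
To make the contrast concrete, below is a minimal sketch (not the authors' released code) of the two strategies the abstract describes: temperature-based up-sampling as the heuristic baseline, and a learned per-language scorer updated with a REINFORCE-style rule whose reward is the cosine similarity between the sampled language's training gradient and an aggregate dev gradient, in the spirit of the paper's data scorer. The toy corpus sizes, model, learning rates, and reward definition are all illustrative assumptions.

import numpy as np

rng = np.random.default_rng(0)

# Toy imbalanced corpus: sentence pairs available per language
# (high-, mid-, and low-resource). Sizes are made up for illustration.
sizes = np.array([1_000_000, 50_000, 5_000], dtype=float)

def temperature_sampling(sizes, T):
    """Heuristic baseline: p_i proportional to (n_i / sum_j n_j)^(1/T).
    T=1 follows the raw data distribution; larger T up-samples
    low-resource languages toward uniform."""
    q = sizes / sizes.sum()
    p = q ** (1.0 / T)
    return p / p.sum()

print(temperature_sampling(sizes, T=1))  # proportional sampling
print(temperature_sampling(sizes, T=5))  # heavy up-sampling of low-resource

# Learned alternative: keep a per-language logit psi and update it with a
# REINFORCE-style rule, rewarding languages whose training gradient aligns
# with the dev-set gradient. The "model" here is a toy linear regressor.
dim = 8
w = rng.normal(size=dim)                           # shared model parameters
targets = [rng.normal(size=dim) for _ in sizes]    # one toy task per language
psi = np.zeros(len(sizes))                         # scorer logits -> p(lang)

def grad(w, target, n=32):
    """Gradient of mean squared error on a random batch for one 'language'."""
    X = rng.normal(size=(n, dim))
    err = X @ w - X @ target
    return X.T @ err / n

for step in range(200):
    p = np.exp(psi - psi.max()); p /= p.sum()      # softmax over languages
    lang = rng.choice(len(sizes), p=p)             # sample a language
    g_train = grad(w, targets[lang])
    w -= 0.1 * g_train                             # train step on that language
    # Dev gradient aggregated with equal weight over all test languages.
    g_dev = sum(grad(w, t) for t in targets) / len(sizes)
    reward = g_train @ g_dev / (
        np.linalg.norm(g_train) * np.linalg.norm(g_dev) + 1e-8)
    # REINFORCE update: raise the sampled language's logit in proportion
    # to how well its gradient aligned with the dev gradient.
    onehot = np.eye(len(sizes))[lang]
    psi += 0.1 * reward * (onehot - p)

print("learned sampling distribution:", np.round(p, 3))

Under this setup the scorer shifts probability toward languages whose updates also help the other languages' dev objective, which is the intuition behind optimizing the scorer for performance on all test languages rather than fixing an up-sampling temperature by hand.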

Related benchmarks

Task                                           Dataset                          Result                  Rank
Text Retrieval                                 BEIR-5 (test)                    Avg. NDCG@10: 50.4      26
Multilingual Long Document Retrieval           MLDR 13 (test)                   NDCG@10: 56.7           18
Text Retrieval                                 BEIR-5 all-MiniLM-L6-v2 (test)   Average NDCG@10: 42.5   14
Many-to-One Multilingual Machine Translation   TED-8-DIVERSE base (test)        BLEU: 27                14
One-to-Many Multilingual Machine Translation   TED-8-DIVERSE base (test)        BLEU: 18.24             14
