
Cross-lingual Language Model Pretraining

About

Recent studies have demonstrated the efficiency of generative pretraining for English natural language understanding. In this work, we extend this approach to multiple languages and show the effectiveness of cross-lingual pretraining. We propose two methods to learn cross-lingual language models (XLMs): one unsupervised that only relies on monolingual data, and one supervised that leverages parallel data with a new cross-lingual language model objective. We obtain state-of-the-art results on cross-lingual classification, unsupervised and supervised machine translation. On XNLI, our approach pushes the state of the art by an absolute gain of 4.9% accuracy. On unsupervised machine translation, we obtain 34.3 BLEU on WMT'16 German-English, improving the previous state of the art by more than 9 BLEU. On supervised machine translation, we obtain a new state of the art of 38.5 BLEU on WMT'16 Romanian-English, outperforming the previous best approach by more than 4 BLEU. Our code and pretrained models will be made publicly available.
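The supervised objective mentioned above (translation language modeling, TLM) concatenates a parallel sentence pair into a single stream and masks tokens in both languages, so the model can rely on the translation as extra context when predicting a masked word. The following minimal sketch shows how such a training example could be assembled; the function name, special tokens, masking rate, and tensor layout are illustrative assumptions, not the authors' released code.

```python
import random

MASK, PAD = "[MASK]", "[PAD]"

def make_tlm_example(src_tokens, tgt_tokens, mask_prob=0.15, seed=0):
    """Sketch of a TLM-style example: a parallel pair is concatenated
    and tokens are masked in *both* languages, so the model can attend
    across languages to predict a mask (illustrative, not the authors'
    implementation)."""
    rng = random.Random(seed)
    tokens = src_tokens + tgt_tokens
    # Language IDs distinguish the two halves of the stream.
    lang_ids = [0] * len(src_tokens) + [1] * len(tgt_tokens)
    # Position indices are reset at the start of the target sentence.
    positions = list(range(len(src_tokens))) + list(range(len(tgt_tokens)))

    inputs, labels = [], []
    for tok in tokens:
        if rng.random() < mask_prob:
            inputs.append(MASK)   # token to predict from both languages
            labels.append(tok)
        else:
            inputs.append(tok)
            labels.append(PAD)    # ignored by the loss
    return inputs, labels, lang_ids, positions

# Example: an English sentence paired with its French translation.
en = "the curtains were blue".split()
fr = "les rideaux étaient bleus".split()
inp, lab, langs, pos = make_tlm_example(en, fr, mask_prob=0.3, seed=1)
print(list(zip(inp, lab, langs, pos)))
```

The unsupervised variant uses the same masking scheme on monolingual streams only, without the parallel half of the input.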

Guillaume Lample, Alexis Conneau • 2019

Related benchmarks

Task | Dataset | Result | Rank
Machine Translation | WMT En-De 2014 (test) | BLEU 28.8 | 379
Machine Translation | WMT En-Fr 2014 (test) | BLEU 33.4 | 237
Abstractive Text Summarization | CNN/Daily Mail (test) | ROUGE-L 40.69 | 169
Natural Language Inference | XNLI (test) | Average Accuracy 76.7 | 167
Machine Translation | WMT Ro-En 2016 (test) | BLEU 35.6 | 82
Named Entity Recognition | NER (test) | F1 Score 61.2 | 68
Machine Translation | WMT16 English-German (test) | BLEU 27 | 58
Machine Translation | WMT16 EN-RO (test) | BLEU 38.5 | 56
Machine Translation | WMT English-French 2014 (test) | BLEU 33.4 | 41
Machine Translation | WMT16 German-English (test) | BLEU 34.3 | 39
Showing 10 of 83 rows
