
XLM-K: Improving Cross-Lingual Language Model Pre-training with Multilingual Knowledge

About

Cross-lingual pre-training has achieved great success using monolingual and bilingual plain text corpora. However, most pre-trained models neglect multilingual knowledge, which is language agnostic but comprises abundant cross-lingual structure alignment. In this paper, we propose XLM-K, a cross-lingual language model incorporating multilingual knowledge in pre-training. XLM-K augments existing multilingual pre-training with two knowledge tasks, namely the Masked Entity Prediction Task and the Object Entailment Task. We evaluate XLM-K on MLQA, NER and XNLI. Experimental results clearly demonstrate significant improvements over existing multilingual language models. The results on MLQA and NER exhibit the superiority of XLM-K in knowledge-related tasks. The success on XNLI shows the better cross-lingual transferability obtained by XLM-K. Moreover, we provide a detailed probing analysis to confirm the desired knowledge captured in our pre-training regimen. The code is available at https://github.com/microsoft/Unicoder/tree/master/pretraining/xlmk.
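
The minimal sketch below is not the authors' released code; it only illustrates, under stated assumptions, how the two knowledge objectives named in the abstract could sit on top of a shared multilingual encoder: a token-level head for the Masked Entity Prediction Task, a sequence-level binary head for the Object Entailment Task, and the usual MLM head. The class names, toy encoder, and sizes are all illustrative; see the repository linked above for the actual implementation.

```python
import torch
import torch.nn as nn


class TinyEncoder(nn.Module):
    """Toy stand-in for a multilingual Transformer encoder (e.g. an XLM-R-style model)."""
    def __init__(self, vocab_size: int, hidden: int):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)

    def forward(self, input_ids: torch.Tensor) -> torch.Tensor:
        # Returns contextual representations of shape (batch, seq_len, hidden).
        return self.encoder(self.embed(input_ids))


class XLMKSketch(nn.Module):
    """Shared encoder with three heads: MLM, Masked Entity Prediction, Object Entailment."""
    def __init__(self, vocab_size: int = 1000, hidden: int = 128, num_entities: int = 500):
        super().__init__()
        self.encoder = TinyEncoder(vocab_size, hidden)
        self.mlm_head = nn.Linear(hidden, vocab_size)       # standard masked-token prediction
        self.entity_head = nn.Linear(hidden, num_entities)  # masked mention -> knowledge-base entity id
        self.entail_head = nn.Linear(hidden, 2)             # does the passage entail the triple's object?

    def forward(self, input_ids: torch.Tensor):
        h = self.encoder(input_ids)
        return (self.mlm_head(h),            # (batch, seq_len, vocab_size)
                self.entity_head(h),         # (batch, seq_len, num_entities)
                self.entail_head(h[:, 0]))   # (batch, 2), first token as sequence summary


# Usage: a forward pass on random token ids, just to show the output shapes.
model = XLMKSketch()
ids = torch.randint(0, 1000, (2, 16))
mlm_logits, entity_logits, entail_logits = model(ids)
```

In practice the three losses (cross-entropy for each head) would be summed during pre-training; the sketch only shows where each objective attaches to the encoder.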

Xiaoze Jiang, Yaobo Liang, Weizhu Chen, Nan Duan • 2021

Related benchmarks

Task                             | Dataset                    | Metric          | Result | Rank
Named Entity Recognition         | CoNLL NER 2002/2003 (test) | German F1 Score | 73.3   | 59
Cross-lingual Question Answering | MLQA v1.0 (test)           | F1 (es)         | 69.2   | 34
