
Improving Pretrained Cross-Lingual Language Models via Self-Labeled Word Alignment

About

Cross-lingual language models are typically pretrained with masked language modeling on multilingual text or parallel sentences. In this paper, we introduce denoising word alignment as a new cross-lingual pre-training task. Specifically, the model first self-labels word alignments for parallel sentences. Then we randomly mask tokens in a bitext pair. Given a masked token, the model uses a pointer network to predict the aligned token in the other language. We alternately perform the above two steps in an expectation-maximization manner. Experimental results show that our method improves cross-lingual transferability on various datasets, especially on token-level tasks such as question answering and structured prediction. Moreover, the model can serve as a pretrained word aligner, which achieves reasonably low error rates on the alignment benchmarks. The code and pretrained parameters are available at https://github.com/CZWin32768/XLM-Align.
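The pointer-network step described above — scoring every token on the other side of the bitext for a masked token and picking the highest-scoring one as its alignment — can be sketched as follows. This is a minimal illustration with made-up hidden states and a simple dot-product scorer, not the paper's actual model; the function name and toy vectors are assumptions for the example.

```python
import numpy as np

def pointer_align(masked_hidden, target_hidden):
    # Dot-product attention of the masked token's representation over
    # all target-side token representations; a softmax turns the scores
    # into alignment probabilities, and argmax picks the aligned token.
    scores = target_hidden @ masked_hidden          # shape: (num_target_tokens,)
    probs = np.exp(scores - scores.max())
    probs /= probs.sum()
    return int(np.argmax(probs)), probs

# Toy hidden states: 4 target tokens with 3-dimensional representations.
target = np.array([[1.0, 0.0, 0.0],
                   [0.0, 1.0, 0.0],
                   [0.0, 0.0, 1.0],
                   [0.5, 0.5, 0.0]])
masked = np.array([0.0, 0.0, 1.0])  # representation of one masked source token
idx, probs = pointer_align(masked, target)
# idx == 2: the pointer selects target token 2 as the predicted alignment
```

In the paper's EM-style loop, the current model's self-labeled alignments would supply the supervision for this prediction step, and the two steps alternate during pretraining.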

Zewen Chi, Li Dong, Bo Zheng, Shaohan Huang, Xian-Ling Mao, Heyan Huang, Furu Wei • 2021

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Natural Language Inference | XNLI (test) | Average Accuracy | 82.3 | 167 |
| Cross-lingual Language Understanding | XTREME | XNLI Accuracy | 76.2 | 38 |
| Question Answering | MLQA (test) | F1 Score | 73.4 | 35 |
| Cross-lingual sentence retrieval | Tatoeba Parallel, 14 language pairs | -- | -- | 14 |
| Word Alignment | EuroParl en-de, en-fr, en-hi, en-ro (WPT2003, WPT2005) | AER (en-de) | 16.63 | 12 |
| Cross-lingual sentence retrieval (en → xx) | Tatoeba-36 | Accuracy@1 | 55.5 | 11 |
| Cross-lingual sentence retrieval (xx → en) | Tatoeba-36 | Average Accuracy@1 | 53.4 | 11 |
| Cross-lingual Transfer | XTREME (test) | MLQA | 20.3 | 6 |

Other info

Code: https://github.com/CZWin32768/XLM-Align