Unicoder: A Universal Language Encoder by Pre-training with Multiple Cross-lingual Tasks

About

We present Unicoder, a universal language encoder that is insensitive to the choice of input language. Given an arbitrary NLP task, a model can be trained with Unicoder on training data in one language and applied directly to inputs of the same task in other languages. Compared with similar efforts such as Multilingual BERT and XLM, Unicoder adds three new cross-lingual pre-training tasks: cross-lingual word recovery, cross-lingual paraphrase classification, and cross-lingual masked language model. These tasks help Unicoder learn the mappings among different languages from more perspectives. We also find that fine-tuning on multiple languages together brings further improvement. Experiments are performed on two tasks, cross-lingual natural language inference (XNLI) and cross-lingual question answering (XQA), with XLM as the baseline. On XNLI, Unicoder obtains a 1.8% average accuracy improvement across 15 languages. On XQA, a new cross-lingual dataset we built, it obtains a 5.5% average accuracy improvement on French and German.

Haoyang Huang, Yaobo Liang, Nan Duan, Ming Gong, Linjun Shou, Daxin Jiang, Ming Zhou • 2019
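To make the cross-lingual masked language model objective concrete, here is a minimal sketch of how a bilingual training example can be constructed: a sentence and its translation are concatenated, and tokens are masked in both halves, so the model can recover a masked word using context from either language. This is an illustrative reconstruction, not the authors' code; the whitespace tokenization, the [MASK]/[SEP] symbols, the mask_prob value, and the English-French example pair are all assumptions made for the sketch.

```python
import random

MASK_TOKEN = "[MASK]"
SEP_TOKEN = "[SEP]"

def make_xmlm_example(src_tokens, tgt_tokens, mask_prob=0.15, seed=None):
    """Build one cross-lingual MLM example from a bilingual sentence pair.

    The two sentences are concatenated with a separator, and each token
    (except the separator) is masked with probability `mask_prob`.  The
    labels record the original token at each masked position; unmasked
    positions get None and are not scored.
    """
    rng = random.Random(seed)
    tokens = src_tokens + [SEP_TOKEN] + tgt_tokens
    inputs, labels = [], []
    for tok in tokens:
        if tok != SEP_TOKEN and rng.random() < mask_prob:
            inputs.append(MASK_TOKEN)  # model must predict this token
            labels.append(tok)
        else:
            inputs.append(tok)
            labels.append(None)        # position excluded from the loss
    return inputs, labels

# Hypothetical English-French pair, whitespace-tokenized for illustration;
# a real implementation would use a shared subword (e.g. BPE) vocabulary.
en = "the cat sat on the mat".split()
fr = "le chat est assis sur le tapis".split()
inp, lab = make_xmlm_example(en, fr, mask_prob=0.3, seed=0)
print(inp)
print(lab)
```

The point of pairing the two languages in one input is that a masked word can often be recovered only from the other language's half, which pushes the encoder to align representations across languages rather than model each language in isolation.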

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Image Retrieval | Flickr30K | R@1 | 71.5 | 144 |
| Text Retrieval | Flickr30K | R@1 | 86.2 | 75 |
| Natural Language Inference | XNLI 1.0 (test) | Accuracy | 78.5 | 38 |
| Text Retrieval | COCO Caption | R@1 | 62.3 | 28 |
| Image-Text Retrieval | Flickr30k (test) | -- | -- | 21 |
| Named Entity Recognition | XGLUE (test) | Score (de) | 71.8 | 6 |
| Part-of-Speech Tagging | XGLUE 1.0 (test) | Accuracy (ar) | 68.6 | 6 |
| News Classification | XGLUE News Classification (test) | Accuracy (de) | 84.2 | 5 |
| Cross-lingual Language Understanding | XGLUE 1.0 (test) | Avg Score | 76.1 | 2 |
