
mLUKE: The Power of Entity Representations in Multilingual Pretrained Language Models

About

Recent studies have shown that multilingual pretrained language models can be effectively improved with cross-lingual alignment information from Wikipedia entities. However, existing methods only exploit entity information in pretraining and do not explicitly use entities in downstream tasks. In this study, we explore the effectiveness of leveraging entity representations for downstream cross-lingual tasks. We train a multilingual language model in 24 languages with entity representations and show that the model consistently outperforms word-based pretrained models on various cross-lingual transfer tasks. We also analyze the model, and the key insight is that incorporating entity representations into the input allows us to extract more language-agnostic features. We further evaluate the model on a multilingual cloze prompt task with the mLAMA dataset, and show that entity-based prompts elicit correct factual knowledge more often than prompts using only word representations. Our source code and pretrained models are available at https://github.com/studio-ousia/luke.
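The core idea of the abstract — appending entity embeddings from a separate entity vocabulary to the word-token input sequence — can be illustrated with a minimal toy sketch. Everything here (the tiny vocabularies, the embedding tables, the `build_input` helper) is hypothetical and only mimics the shape of the input an entity-augmented model like (m)LUKE consumes; it is not the paper's implementation.

```python
import random

random.seed(0)
DIM = 4  # toy embedding size

# Two separate vocabularies: subword tokens and Wikipedia entities (toy data).
word_vocab = ["[CLS]", "Tokyo", "is", "in", "Japan", "[SEP]"]
entity_vocab = ["[MASK_ENT]", "Tokyo", "Japan"]

# Independent embedding tables, as in entity-augmented pretraining.
word_emb = {w: [random.random() for _ in range(DIM)] for w in word_vocab}
entity_emb = {e: [random.random() for _ in range(DIM)] for e in entity_vocab}

def build_input(words, entities):
    """Concatenate word-position and entity-position embeddings into one
    input sequence; the Transformer then attends over both kinds of token.
    Masking an entity position ([MASK_ENT]) turns this into the
    entity-based cloze setup described above."""
    return [word_emb[w] for w in words] + [entity_emb[e] for e in entities]

seq = build_input(["[CLS]", "Tokyo", "is", "in", "Japan", "[SEP]"],
                  ["Tokyo", "[MASK_ENT]"])
print(len(seq))  # 6 word positions + 2 entity positions = 8
```

Because the entity embeddings are shared across languages, the same entity position can be attached to text in any of the 24 pretraining languages, which is one intuition for why the input becomes more language-agnostic.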

Ryokan Ri, Ikuya Yamada, Yoshimasa Tsuruoka • 2021

Related benchmarks

Task                              Dataset                      Metric           Result  Rank
Named Entity Recognition          CoNLL NER 2002/2003 (test)   German F1 Score  78.3    59
Cross-lingual Question Answering  MLQA v1.0 (test)             F1 (es)          74.5    34
Question Answering                XQuAD 1.0 (test)             F1 Score         79.6    10
Knowledge Probing                 mLAMA                        Accuracy (DE)    43.7    8
Question Answering                XQuAD v1.1 (test)            F1 (en)          89      8
Question Answering                MLQA G-XLT v1.0 (test)       Avg Score        67.7    8
Relation Extraction               RELX (test)                  F1 (en)          69.3    8
Cloze Prompt Task                 mLAMA (test)                 Accuracy (ar)    42.4    6

Other info

Code
