
E-BERT: Efficient-Yet-Effective Entity Embeddings for BERT

About

We present a novel way of injecting factual knowledge about entities into the pretrained BERT model (Devlin et al., 2019): We align Wikipedia2Vec entity vectors (Yamada et al., 2016) with BERT's native wordpiece vector space and use the aligned entity vectors as if they were wordpiece vectors. The resulting entity-enhanced version of BERT (called E-BERT) is similar in spirit to ERNIE (Zhang et al., 2019) and KnowBert (Peters et al., 2019), but it requires no expensive further pretraining of the BERT encoder. We evaluate E-BERT on unsupervised question answering (QA), supervised relation classification (RC) and entity linking (EL). On all three tasks, E-BERT outperforms BERT and other baselines. We also show quantitatively that the original BERT model is overly reliant on the surface form of entity names (e.g., guessing that someone with an Italian-sounding name speaks Italian), and that E-BERT mitigates this problem.
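
The alignment itself can be realized, for example, as an unconstrained linear map fit by least squares on words that occur in both the Wikipedia2Vec and BERT wordpiece vocabularies, then applied to the entity vectors. The sketch below illustrates this under that assumption; the input dicts (`wiki2vec_words`, `bert_wordpieces`) and both function names are hypothetical placeholders, not the authors' released code.

```python
import numpy as np

def fit_alignment(wiki2vec_words, bert_wordpieces):
    """Fit a linear map W such that wiki_vec @ W ~= bert_vec
    for every word present in both vocabularies (least squares)."""
    shared = sorted(set(wiki2vec_words) & set(bert_wordpieces))
    A = np.stack([wiki2vec_words[w] for w in shared])   # (n, d_wiki)
    B = np.stack([bert_wordpieces[w] for w in shared])  # (n, d_bert)
    # Solve min_W ||A @ W - B||_F^2; W has shape (d_wiki, d_bert).
    W, *_ = np.linalg.lstsq(A, B, rcond=None)
    return W

def align_entity(entity_vec, W):
    """Project a Wikipedia2Vec entity vector into BERT's wordpiece
    space so it can be consumed as if it were a wordpiece embedding."""
    return entity_vec @ W
```

Once aligned, an entity vector can stand in for the wordpiece embeddings of the entity's name in BERT's input, which is what lets E-BERT skip any further pretraining of the encoder.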

Nina Poerner, Ulli Waltinger, Hinrich Schütze • 2019

Related benchmarks

Task                  Dataset        Metric     Result   Rank
Relation Extraction   Wiki80         Accuracy   0.854    51
Entity Linking        AIDA (testb)   Micro F1   85       28
Entity Typing         Wiki-ET        F1 Score   78.4     24
Entity Linking        AIDA (testa)   Micro F1   90.8     23
