
Empirical Evaluation of Pretraining Strategies for Supervised Entity Linking

About

In this work, we present an entity linking model that combines a Transformer architecture with large-scale pretraining from Wikipedia links. Our model achieves state-of-the-art results on two commonly used entity linking datasets: 96.7% on CoNLL and 94.9% on TAC-KBP. We present detailed analyses to understand which design choices matter for entity linking, including the choice of negative entity candidates, the Transformer architecture, and input perturbations. Lastly, we present promising results in more challenging settings such as end-to-end entity linking and entity linking without in-domain training data.
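To make the described setup concrete, below is a minimal PyTorch sketch of one plausible reading of such a model: a Transformer encoder produces a mention representation that is scored by dot product against learned entity embeddings (one per Wikipedia entity), trained with a softmax over candidates so that non-gold entities act as negatives. The class name, dimensions, and sampling scheme are illustrative assumptions, not the authors' actual implementation.

    # Hypothetical sketch of a Transformer mention encoder scored against
    # entity embeddings. Names and hyperparameters are illustrative only.
    import torch
    import torch.nn as nn

    class MentionEntityScorer(nn.Module):
        def __init__(self, vocab_size=30522, hidden=256, n_entities=100_000):
            super().__init__()
            self.token_emb = nn.Embedding(vocab_size, hidden)
            layer = nn.TransformerEncoderLayer(
                d_model=hidden, nhead=4, batch_first=True)
            self.encoder = nn.TransformerEncoder(layer, num_layers=4)
            # One learned vector per candidate entity (e.g. a Wikipedia page).
            self.entity_emb = nn.Embedding(n_entities, hidden)

        def forward(self, token_ids, mention_pos):
            # Encode the context; use the hidden state at the mention's
            # position as the mention representation.
            h = self.encoder(self.token_emb(token_ids))        # (B, T, H)
            mention = h[torch.arange(h.size(0)), mention_pos]  # (B, H)
            # Score every entity by dot product with the mention vector.
            return mention @ self.entity_emb.weight.T          # (B, n_entities)

    # Toy training step: cross-entropy over the full entity set, so every
    # non-gold entity serves as a negative candidate.
    model = MentionEntityScorer()
    token_ids = torch.randint(0, 30522, (8, 64))  # 8 contexts, 64 tokens each
    mention_pos = torch.randint(0, 64, (8,))
    gold_entities = torch.randint(0, 100_000, (8,))
    loss = nn.CrossEntropyLoss()(model(token_ids, mention_pos), gold_entities)
    loss.backward()

The paper's analysis of negative entity candidates suggests the choice of what fills the non-gold slots in this softmax is itself a key design decision; the full-softmax variant above is just one option.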

Thibault Févry, Nicholas FitzGerald, Livio Baldini Soares, Tom Kwiatkowski • 2020

Related benchmarks

Task                      | Dataset                    | Metric         | Result | Rank
Entity Disambiguation     | AIDA CoNLL (test)          | In-KB Accuracy | 96.7   | 36
Entity Disambiguation     | ZELDA Benchmark (test)     | AIDA-B         | 79.5   | 35
Entity Linking            | AIDA (testb)               | Micro F1       | 76.7   | 28
Entity Linking            | AIDA (testa)               | Micro F1       | 79.7   | 23
Entity Linking            | TAC-KBP 2010 (test)        | Accuracy       | 94.9   | 16
Entity Linking            | AIDA-B (test)              | Micro F1       | 0.767  | 12
Entity Disambiguation     | CoNLL table P (test)       | Accuracy       | 96.7   | 7
End-to-end Entity Linking | CoNLL (test)               | Micro F1       | 76.7   | 7
End-to-end Entity Linking | CoNLL (dev)                | Micro F1       | 79.7   | 7
Entity Disambiguation     | CoNLL alias table H (test) | Accuracy       | 92.5   | 5
(Showing 10 of 11 benchmark rows.)
