Empirical Evaluation of Pretraining Strategies for Supervised Entity Linking
About
In this work, we present an entity linking model which combines a Transformer architecture with large scale pretraining from Wikipedia links. Our model achieves the state-of-the-art on two commonly used entity linking datasets: 96.7% on CoNLL and 94.9% on TAC-KBP. We present detailed analyses to understand what design choices are important for entity linking, including choices of negative entity candidates, Transformer architecture, and input perturbations. Lastly, we present promising results on more challenging settings such as end-to-end entity linking and entity linking without in-domain training data.
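The abstract describes scoring negative and positive entity candidates with Transformer encodings. As a minimal sketch of the candidate-scoring step only (the mention and entity vectors below are hypothetical stand-ins for Transformer outputs, not the paper's actual model), disambiguation reduces to an argmax over dot-product scores:

```python
import numpy as np

def link_mention(mention_vec, candidate_vecs, candidate_ids):
    """Pick the candidate entity whose encoding best matches the mention.

    mention_vec: (d,) encoding of the mention in context.
    candidate_vecs: (n, d) encodings of n candidate entities.
    candidate_ids: list of n entity identifiers.
    """
    scores = candidate_vecs @ mention_vec  # one dot-product score per candidate
    return candidate_ids[int(np.argmax(scores))]

# Toy example with hand-picked 2-d vectors: the first candidate
# aligns best with the mention encoding.
mention = np.array([1.0, 0.0])
cands = np.array([[0.9, 0.1],
                  [-1.0, 0.0],
                  [0.5, 0.5]])
print(link_mention(mention, cands, ["A", "B", "C"]))  # prints "A"
```

During training, the non-gold rows play the role of negative candidates; the paper's analyses concern how such negatives are chosen, which this sketch does not model.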
Thibault Févry, Nicholas FitzGerald, Livio Baldini Soares, Tom Kwiatkowski • 2020
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Entity Disambiguation | AIDA CoNLL (test) | In-KB Accuracy | 96.7 | 36 |
| Entity Disambiguation | ZELDA Benchmark (test) | AIDA-B | 79.5 | 35 |
| Entity Linking | AIDA (testb) | Micro F1 | 76.7 | 28 |
| Entity Linking | AIDA (testa) | Micro F1 | 79.7 | 23 |
| Entity Linking | TAC-KBP 2010 (test) | Accuracy | 94.9 | 16 |
| Entity Linking | AIDA-B (test) | Micro F1 | 0.767 | 12 |
| Entity Disambiguation | CoNLL alias table P (test) | Accuracy | 96.7 | 7 |
| End-to-end Entity Linking | CoNLL (test) | Micro F1 | 76.7 | 7 |
| End-to-end Entity Linking | CoNLL (dev) | Micro F1 | 79.7 | 7 |
| Entity Disambiguation | CoNLL alias table H (test) | Accuracy | 92.5 | 5 |
Showing 10 of 11 benchmark rows.