LELA: an LLM-based Entity Linking Approach with Zero-Shot Domain Adaptation
About
Entity linking (mapping ambiguous mentions in text to entities in a knowledge base) is a foundational step in tasks such as knowledge graph construction, question answering, and information extraction. Our method, LELA, is a modular coarse-to-fine approach that leverages the capabilities of large language models (LLMs) and works across different target domains, knowledge bases, and LLMs, without any fine-tuning. Our experiments across various entity linking settings show that LELA is highly competitive with fine-tuned approaches and substantially outperforms non-fine-tuned ones.
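The coarse-to-fine idea above can be sketched as a two-stage pipeline: a cheap coarse stage retrieves a short candidate list from the knowledge base, then a fine stage asks an LLM to pick the best candidate given the mention's context. The toy knowledge base, the lexical-overlap retriever, and the `llm_choose` stand-in below are all illustrative assumptions, not the authors' implementation (in LELA the fine stage would be an actual LLM call):

```python
import re

# Toy knowledge base: entity ID -> short description (illustrative assumption).
KB = {
    "Q90": "Paris, the capital city of France",
    "Q167646": "Paris, a genus of flowering plants",
    "Q79917": "Paris Hilton, an American media personality",
}

def toks(s):
    """Lowercased word tokens of a string."""
    return set(re.findall(r"\w+", s.lower()))

def coarse_candidates(mention, kb, k=3):
    """Coarse stage: cheap lexical retrieval of the top-k candidate entities."""
    m = toks(mention)
    scored = sorted(kb.items(), key=lambda item: -len(m & toks(item[1])))
    return [eid for eid, _ in scored[:k]]

def llm_choose(mention, context, candidates, kb):
    """Fine stage: in LELA an LLM disambiguates among the candidates given
    the context; here a keyword-overlap heuristic stands in for the LLM."""
    ctx = toks(context)
    return max(candidates, key=lambda eid: len(ctx & toks(kb[eid])))

def link(mention, context, kb):
    """Full pipeline: coarse candidate retrieval, then fine disambiguation."""
    return llm_choose(mention, context, coarse_candidates(mention, kb), kb)

print(link("Paris", "the capital of France hosted the summit", KB))  # → Q90
```

Because the stages are decoupled, either one can be swapped out (a different retriever, knowledge base, or LLM) without retraining, which is the modularity the paragraph above refers to.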
Samy Haffoudhi, Fabian M. Suchanek, Nils Holzenberger • 2026
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Entity Disambiguation | ZELDA Benchmark (test) | AIDA-B | 84.2 | 35 |
| Entity Linking | ZESHEL (test) | Macro Accuracy | 83.11 | 15 |
| Entity Linking | WikilinksNED Unseen Mentions | Accuracy | 68.7 | 15 |
| Entity Linking | ESCO (test) | Accuracy | 39.36 | 13 |
| Acronym Disambiguation | GLADIS | Accuracy (General) | 80.1 | 10 |