
Benchmarking Diverse-Modal Entity Linking with Generative Models

About

Entities can be expressed in diverse formats, such as texts, images, or column names and cell values in tables. While existing entity linking (EL) models work well for a single modality configuration, such as text-only EL, visual grounding, or schema linking, it is more challenging to design a unified model for diverse modality configurations. To bring various modality configurations together, we constructed a benchmark for diverse-modal EL (DMEL) from existing EL datasets, covering three modalities: text, image, and table. To approach the DMEL task, we proposed a generative diverse-modal model (GDMM) following a multimodal encoder-decoder paradigm. Pre-training GDMM with rich corpora builds a solid foundation for DMEL without storing the entire KB for inference. Fine-tuning GDMM builds a stronger DMEL baseline, outperforming state-of-the-art task-specific EL models by 8.51 F1 score on average. Additionally, extensive error analyses are conducted to highlight the challenges of DMEL, facilitating future research on this task.
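The abstract's key design point is that a generative EL model decodes an entity *name* token by token rather than scoring every KB entry, so the full KB need not be stored at inference time. A common way to keep such decoding valid is to constrain it with a prefix trie over entity names. The sketch below illustrates that general idea; the function and variable names are illustrative assumptions, not GDMM's actual implementation.

```python
# Minimal sketch of trie-constrained generation for generative entity linking.
# A prefix trie over valid entity names restricts each decoding step to tokens
# that can still complete a real entity name. Names here are hypothetical.

def build_trie(entity_names):
    """Map each whitespace-tokenized entity name into a nested-dict prefix trie."""
    trie = {}
    for name in entity_names:
        node = trie
        for token in name.split():
            node = node.setdefault(token, {})
        node["<eos>"] = {}  # marks a complete entity name
    return trie

def allowed_next_tokens(trie, prefix):
    """Tokens a constrained decoder may emit after generating `prefix` so far."""
    node = trie
    for token in prefix:
        node = node[token]
    return sorted(node.keys())

entities = ["New York City", "New York Times", "York"]
trie = build_trie(entities)
print(allowed_next_tokens(trie, []))              # → ['New', 'York']
print(allowed_next_tokens(trie, ["New", "York"]))  # → ['City', 'Times']
```

At each decoding step, the model's token distribution would be masked to `allowed_next_tokens`, guaranteeing the generated string is always a KB entity name; memory scales with the name vocabulary rather than with full KB entries.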

Sijia Wang, Alexander Hanbo Li, Henry Zhu, Sheng Zhang, Chung-Wei Hang, Pramuditha Perera, Jie Ma, William Wang, Zhiguo Wang, Vittorio Castelli, Bing Xiang, Patrick Ng • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Schema linking | SLSQL (test) | F1 Score | 84.43 | 9 |
| Schema linking | Squall (test) | F1 Score | 89.69 | 3 |
| Visual Entity Disambiguation | WikiDiverse (test) | F1 Score (%) | 79.1 | 3 |
| Visual Entity Disambiguation | MELBench (test) | F1 Score | 72.41 | 3 |
| Entity Disambiguation | GERBIL (test) | F1 Score | 86.11 | 3 |
| Entity Linking | MELBench | -- | -- | 1 |
