
MEAformer: Multi-modal Entity Alignment Transformer for Meta Modality Hybrid

About

Multi-modal entity alignment (MMEA) aims to discover identical entities across different knowledge graphs (KGs) whose entities are associated with relevant images. However, current MMEA algorithms rely on KG-level modality fusion strategies for multi-modal entity representation, which ignores variations in the modality preferences of different entities and thus compromises robustness against noise in modalities such as blurry images and relations. This paper introduces MEAformer, a multi-modal entity alignment transformer approach for meta modality hybrid, which dynamically predicts the mutual correlation coefficients among modalities for more fine-grained entity-level modality fusion and alignment. Experimental results demonstrate that our model not only achieves SOTA performance in multiple training scenarios, including supervised, unsupervised, iterative, and low-resource settings, but also has a limited number of parameters, efficient runtime, and interpretability. Our code is available at https://github.com/zjukg/MEAformer.
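The core idea above is entity-level rather than KG-level fusion: the weights that combine an entity's modality embeddings are predicted per entity from cross-modal correlations. The following is a minimal NumPy sketch of that idea, not the authors' implementation; the function name, the dot-product correlation score, and the mean-pooling step are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def entity_level_fusion(modal_embs):
    """Fuse one entity's modality embeddings (n_modalities, dim) with
    attention-style weights derived from pairwise modality correlations.
    Hypothetical simplification of the paper's meta modality hybrid."""
    # pairwise dot-product correlations among modalities: (m, m)
    scores = modal_embs @ modal_embs.T / np.sqrt(modal_embs.shape[1])
    weights = softmax(scores, axis=-1)   # per-entity correlation coefficients
    attended = weights @ modal_embs      # cross-modal mixing
    # pool the attended modality views into a single entity embedding
    return attended.mean(axis=0)

rng = np.random.default_rng(0)
embs = rng.normal(size=(3, 8))   # e.g. image / relation / attribute views
fused = entity_level_fusion(embs)
print(fused.shape)               # (8,)
```

Because `weights` is recomputed from each entity's own embeddings, two entities with different modality quality (say, one with a blurry image) end up with different fusion coefficients, which is what a single KG-level weighting cannot express.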

Zhuo Chen, Jiaoyan Chen, Wen Zhang, Lingbing Guo, Yin Fang, Yufeng Huang, Yichi Zhang, Yuxia Geng, Jeff Z. Pan, Wenting Song, Huajun Chen · 2022

Related benchmarks

Task             | Dataset                     | Metric | Result | Rank
Entity Alignment | DBP15K FR-EN                | Hits@1 | 0.996  | 158
Entity Alignment | DBP15K JA-EN (test)         | Hits@1 | 99.1   | 149
Entity Alignment | DBP15K ZH-EN                | Hits@1 | 97.3   | 143
Entity Alignment | DBP15K ZH-EN (test)         | Hits@1 | 97.3   | 134
Entity Alignment | DBP15K FR-EN (test)         | Hits@1 | 99.6   | 133
Entity Alignment | DBP15K JA-EN                | Hits@1 | 0.977  | 126
Entity Alignment | DBP JA-EN 15K               | Hits@1 | 99.1   | 40
Entity Alignment | FB15K-YAGO15K (50% train)   | Hits@1 | 0.612  | 24
Entity Alignment | FB15K-DB15K (50% train)     | Hits@1 | 69     | 24
Entity Alignment | FB15K-DB15K (20% train)     | Hits@1 | 0.578  | 18

Showing 10 of 26 rows
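The Hits@1 scores above (reported variously as fractions and percentages) measure the share of source entities whose true counterpart is ranked first by the alignment model. A small sketch of the metric, assuming the gold match for source entity i sits in column i of a similarity matrix:

```python
import numpy as np

def hits_at_k(sim, k=1):
    """Fraction of source entities whose gold counterpart (assumed to lie
    on the diagonal of `sim`) appears among the top-k ranked candidates."""
    order = np.argsort(-sim, axis=1)          # candidates, most similar first
    gold = np.arange(sim.shape[0])[:, None]   # gold index for each row
    return float((order[:, :k] == gold).any(axis=1).mean())

sim = np.array([[0.9, 0.1, 0.0],
                [0.2, 0.8, 0.5],
                [0.3, 0.7, 0.6]])   # row i's gold match is column i
print(hits_at_k(sim, k=1))          # 2/3: entity 2's top candidate is wrong
print(hits_at_k(sim, k=2))          # 1.0: all gold matches within top 2
```

Multiplying the fraction by 100 gives the percentage form used in some rows of the table (e.g. 0.991 vs. 99.1).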

Other info

Code: https://github.com/zjukg/MEAformer