MEAformer: Multi-modal Entity Alignment Transformer for Meta Modality Hybrid
About
Multi-modal entity alignment (MMEA) aims to discover identical entities across different knowledge graphs (KGs) whose entities are associated with relevant images. However, current MMEA algorithms rely on KG-level modality fusion strategies for multi-modal entity representation, which ignore the variations in modality preferences across entities and thus compromise robustness against noise in modalities such as blurry images and spurious relations. This paper introduces MEAformer, a multi-modal entity alignment transformer approach for meta modality hybrid, which dynamically predicts the mutual correlation coefficients among modalities for more fine-grained, entity-level modality fusion and alignment. Experimental results demonstrate that our model not only achieves SOTA performance in multiple training scenarios (supervised, unsupervised, iterative, and low-resource) but also has a limited number of parameters, efficient runtime, and interpretability. Our code is available at https://github.com/zjukg/MEAformer.
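The core idea — predicting per-entity correlation coefficients among modalities and fusing embeddings at the entity level rather than with one KG-wide weight — can be sketched as follows. This is a minimal illustration, not the paper's actual architecture: the scaled dot-product scoring, the averaging over cross-modal affinities, and all names here are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def entity_level_fusion(modal_embs):
    """Fuse per-modality entity embeddings with per-ENTITY weights.

    modal_embs: array of shape (num_modalities, num_entities, dim),
                e.g. graph / relation / attribute / image embeddings.
    Returns (fused, weights): fused is (num_entities, dim),
    weights is (num_entities, num_modalities) and sums to 1 per entity.
    """
    M, N, d = modal_embs.shape
    x = modal_embs.transpose(1, 0, 2)                 # (N, M, d): modalities per entity
    # cross-modal affinity for each entity: (N, M, M), scaled dot product
    scores = x @ x.transpose(0, 2, 1) / np.sqrt(d)
    # a modality's weight = softmax of its mean affinity to the other modalities,
    # so each entity gets its own mixing coefficients
    weights = softmax(scores.mean(axis=-1), axis=-1)  # (N, M)
    fused = (weights[..., None] * x).sum(axis=1)      # (N, d)
    return fused, weights
```

A KG-level baseline would instead learn a single weight vector shared by all entities; the point of the entity-level variant is that a node with a blurry image can down-weight the visual modality without affecting other entities.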
Related benchmarks
| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Entity Alignment | DBP15K FR-EN | Hits@10 | 0.996 | 158 |
| Entity Alignment | DBP15K JA-EN (test) | Hits@1 | 99.1 | 149 |
| Entity Alignment | DBP15K ZH-EN | Hits@1 | 97.3 | 143 |
| Entity Alignment | DBP15K ZH-EN (test) | Hits@1 | 97.3 | 134 |
| Entity Alignment | DBP15K FR-EN (test) | Hits@1 | 99.6 | 133 |
| Entity Alignment | DBP15K JA-EN | Hits@10 | 0.977 | 126 |
| Entity Alignment | DBP JA-EN 15K | Hits@1 | 99.1 | 40 |
| Entity Alignment | FB15K-YAGO15K (50% train) | Hits@10 | 0.612 | 24 |
| Entity Alignment | FB15K-DB15K (50% train) | Hits@1 | 69 | 24 |
| Entity Alignment | FB15K-DB15K (20% train) | Hits@10 | 0.578 | 18 |
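For reference, Hits@k (the metric reported above) is the fraction of source entities whose true counterpart appears among the top-k ranked candidates. A minimal sketch, assuming a precomputed similarity matrix in which gold alignments lie on the diagonal (the function name and this layout are illustrative conventions, not part of the MEAformer codebase):

```python
import numpy as np

def hits_at_k(sim, k):
    """Hits@k for entity alignment.

    sim: (n, n) array, sim[i, j] = similarity between source entity i
         and target entity j; the gold match for source i is target i.
    Returns the fraction of sources whose gold target ranks in the top k.
    """
    n = sim.shape[0]
    gold = sim[np.arange(n), np.arange(n)]
    # rank = 1 + number of candidates scoring strictly higher than the gold one
    rank = (sim > gold[:, None]).sum(axis=1) + 1
    return float((rank <= k).mean())
```

Hits@1 is therefore exact-match accuracy, and Hits@10 is always at least as high, which is why papers usually report both.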