
Multi-modal Contrastive Representation Learning for Entity Alignment

About

Multi-modal entity alignment aims to identify equivalent entities between two different multi-modal knowledge graphs, which consist of structural triples and images associated with entities. Most previous works focus on how to utilize and encode information from different modalities, but leveraging multi-modal knowledge in entity alignment is not trivial because of the modality heterogeneity. In this paper, we propose MCLEA, a Multi-modal Contrastive Learning based Entity Alignment model, to obtain effective joint representations for multi-modal entity alignment. Different from previous works, MCLEA considers task-oriented modality and models the inter-modal relationships for each entity representation. In particular, MCLEA first learns individual representations from multiple modalities, and then performs contrastive learning to jointly model intra-modal and inter-modal interactions. Extensive experimental results show that MCLEA outperforms state-of-the-art baselines on public datasets under both supervised and unsupervised settings.
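The contrastive learning described above pulls the representations of aligned entities together while pushing apart unaligned ones. As a rough illustration only, the following is a minimal InfoNCE-style loss over one aligned entity pair with sampled negatives; it is not the paper's exact intra-modal/inter-modal objectives, and all names here are illustrative:

```python
import math

def cosine(u, v):
    # Cosine similarity between two embedding vectors.
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def info_nce(anchor, positive, negatives, tau=0.1):
    # Temperature-scaled softmax over one positive and K negatives:
    #   loss = -log( exp(s+/tau) / (exp(s+/tau) + sum_k exp(s-_k/tau)) )
    # where s+ is the similarity to the aligned entity and s-_k to negatives.
    pos = math.exp(cosine(anchor, positive) / tau)
    neg = sum(math.exp(cosine(anchor, n) / tau) for n in negatives)
    return -math.log(pos / (pos + neg))

# When the anchor and its aligned counterpart are close and negatives are
# dissimilar, the loss is near zero; misaligned pairs yield a large loss.
anchor = [1.0, 0.0]
loss_aligned = info_nce(anchor, [1.0, 0.0], [[0.0, 1.0]])
loss_misaligned = info_nce(anchor, [0.0, 1.0], [[1.0, 0.0]])
```

In practice such losses are computed in batches over learned modality-specific encoders; this sketch only shows the shape of the objective.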

Zhenxi Lin, Ziheng Zhang, Meng Wang, Yinghui Shi, Xian Wu, Yefeng Zheng • 2022

Related benchmarks

Task             | Dataset                     | Metric | Result | Rank
Entity Alignment | DBP15K FR-EN                | Hits@1 | 0.995  | 158
Entity Alignment | DBP15K JA-EN (test)         | Hits@1 | 98.6   | 149
Entity Alignment | DBP15K ZH-EN                | Hits@1 | 96     | 143
Entity Alignment | DBP15K ZH-EN (test)         | Hits@1 | 97.2   | 134
Entity Alignment | DBP15K FR-EN (test)         | Hits@1 | 99.7   | 133
Entity Alignment | DBP15K JA-EN                | Hits@1 | 0.983  | 126
Entity Alignment | FB15K-DB15K 50% (train)     | Hits@1 | 57.3   | 24
Entity Alignment | FB15K-YAGO15K (50% train)   | Hits@1 | 0.543  | 24
Entity Alignment | OpenEA D-W V2 1.0 (test)    | Hits@1 | 96.9   | 22
Entity Alignment | DBP15K FR-EN v1 (test)      | Hits@1 | 99.7   | 20

(Showing 10 of 31 rows)
