
Attribute-Consistent Knowledge Graph Representation Learning for Multi-Modal Entity Alignment

About

Multi-modal entity alignment (MMEA) aims to find all equivalent entity pairs between multi-modal knowledge graphs (MMKGs). Rich attributes and neighboring entities are valuable for the alignment task, but existing works ignore the contextual gap problem: aligned entities may have different numbers of attributes in a given modality, which distorts the learned entity representations. In this paper, we propose a novel attribute-consistent knowledge graph representation learning framework for MMEA (ACK-MMEA) that compensates for contextual gaps by incorporating consistent alignment knowledge. Attribute-consistent KGs (ACKGs) are first constructed via multi-modal attribute uniformization with merge and generate operators, so that each entity has exactly one uniform feature per modality. The ACKGs are then fed into a relation-aware graph neural network with random dropouts to obtain aggregated relation representations and robust entity representations. To evaluate how well ACK-MMEA facilitates entity alignment, we design a joint alignment loss covering both entities and attributes. Extensive experiments on two benchmark datasets show that our approach achieves excellent performance compared with its competitors.
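The attribute uniformization idea described above can be illustrated with a minimal sketch. This is not the authors' implementation: the function name, data layout, and the exact merge/generate rules (mean-pooling own attributes, else mean-pooling neighbors' merged features) are simplifying assumptions; the paper's operators are learned components. The sketch only shows the invariant that every entity ends up with exactly one feature vector per modality.

```python
import numpy as np

def uniformize_attributes(entity_attrs, neighbors, dim):
    """Toy sketch of attribute uniformization for a single modality.

    entity_attrs: dict entity id -> list of attribute vectors (possibly empty)
    neighbors:    dict entity id -> list of neighbor entity ids
    Returns a dict giving each entity exactly one feature vector:
    - "merge": entities with attributes get the mean of their own vectors;
    - "generate": entities without attributes borrow the mean of their
      neighbors' merged features (zeros if no neighbor has a feature).
    """
    merged = {}
    # merge operator (assumed here to be mean-pooling)
    for e, attrs in entity_attrs.items():
        if attrs:
            merged[e] = np.mean(attrs, axis=0)
    uniform = {}
    for e in entity_attrs:
        if e in merged:
            uniform[e] = merged[e]
        else:
            # generate operator (assumed: average neighbor features)
            feats = [merged[n] for n in neighbors.get(e, []) if n in merged]
            uniform[e] = np.mean(feats, axis=0) if feats else np.zeros(dim)
    return uniform

# Tiny usage example: entity "b" has no attributes in this modality,
# so it is assigned a generated feature from its neighbor "a".
entity_attrs = {"a": [np.array([1.0, 1.0]), np.array([3.0, 3.0])], "b": []}
neighbors = {"a": ["b"], "b": ["a"]}
uniform = uniformize_attributes(entity_attrs, neighbors, dim=2)
```

After this step every entity carries one vector per modality, so the downstream relation-aware GNN never has to handle missing or variable-length attribute sets.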

Qian Li, Shu Guo, Yangyifei Luo, Cheng Ji, Lihong Wang, Jiawei Sheng, Jianxin Li · 2023

Related benchmarks

Task             | Dataset                    | Metric | Result | Rank
Entity Alignment | FB15K-YAGO15K (50% train)  | Hits@1 | 0.535  | 24
Entity Alignment | FB15K-DB15K (50% train)    | Hits@1 | 50.1   | 24
Entity Alignment | FB15K-DB15K (20% train)    | Hits@1 | 0.304  | 18
Entity Alignment | FB15K-YAGO15K (80% seeds)  | Hits@1 | 0.676  | 14
Entity Alignment | FB15K-DB15K (80% seeds)    | Hits@1 | 0.682  | 14
Entity Alignment | FB15K-YAGO15K (20% seeds)  | Hits@1 | 0.289  | 14
