
Towards Semantic Consistency: Dirichlet Energy Driven Robust Multi-Modal Entity Alignment

About

In Multi-Modal Knowledge Graphs (MMKGs), Multi-Modal Entity Alignment (MMEA) is crucial for identifying identical entities across diverse modal attributes. However, semantic inconsistency, mainly due to missing modal attributes, poses a significant challenge. Traditional approaches rely on attribute interpolation, but this often introduces modality noise, distorting the original semantics. Moreover, the lack of a universal theoretical framework limits advancements in achieving semantic consistency. This study introduces a novel approach, DESAlign, which addresses these issues by applying a theoretical framework based on Dirichlet energy to ensure semantic consistency. We discover that semantic inconsistency leads to model overfitting to modality noise, causing performance fluctuations, particularly when modalities are missing. DESAlign innovatively combats over-smoothing and interpolates absent semantics using existing modalities. Our approach includes a multi-modal knowledge graph learning strategy and a propagation technique that employs existing semantic features to compensate for missing ones, providing explicit Euler solutions. Comprehensive evaluations across 60 benchmark splits, including monolingual and bilingual scenarios, demonstrate that DESAlign surpasses existing methods, setting a new standard in performance. Further testing with high rates of missing modalities confirms its robustness, offering an effective solution to semantic inconsistency in real-world MMKGs.
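To make the two core ideas in the abstract concrete, the sketch below illustrates (a) Dirichlet energy as a smoothness measure over a graph and (b) filling missing features by explicit Euler steps of graph diffusion while keeping observed features fixed. This is a generic NumPy illustration under standard graph-diffusion conventions, not the authors' DESAlign implementation; all function names and the symmetric-normalization choice are assumptions for this sketch.

```python
import numpy as np

def dirichlet_energy(X, A):
    """Dirichlet energy of node features X over a graph with adjacency A:
    E(X) = 1/2 * sum_{i,j} A_ij * ||x_i/sqrt(d_i) - x_j/sqrt(d_j)||^2.
    Lower energy means smoother (more semantically consistent) features."""
    d = np.maximum(A.sum(axis=1), 1e-12)
    Xn = X / np.sqrt(d)[:, None]                   # degree-normalized features
    diff = Xn[:, None, :] - Xn[None, :, :]         # pairwise differences
    return 0.5 * float(np.sum(A[:, :, None] * diff ** 2))

def propagate_missing(X, A, observed, steps=10, tau=0.5):
    """Interpolate missing modal features (observed == False) by explicit
    Euler steps of dX/dt = -L_sym X, restoring observed rows each step."""
    d = np.maximum(A.sum(axis=1), 1e-12)
    A_norm = A / np.sqrt(d)[:, None] / np.sqrt(d)[None, :]  # sym. normalization
    X_obs, X = X.copy(), X.copy()
    for _ in range(steps):
        X = X + tau * (A_norm @ X - X)             # one explicit Euler step
        X[observed] = X_obs[observed]              # keep observed features fixed
    return X
```

On a toy path graph where the middle node's attribute is missing, propagation pulls its features toward its neighbors', and the Dirichlet energy of the result is strictly lower than that of the zero-filled input.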

Yuanyi Wang, Haifeng Sun, Jiabo Wang, Jingyu Wang, Wei Tang, Qi Qi, Shaoling Sun, Jianxin Liao • 2024

Related benchmarks

Task                         Dataset        Result        Rank
Multimodal Entity Alignment  DBP15K ZH-EN   Hits@1 82.6   11
Multimodal Entity Alignment  DBP15K JA-EN   Hits@1 81.1   11
Multimodal Entity Alignment  DBP15K FR-EN   Hits@1 81.0   11
