
NativE: Multi-modal Knowledge Graph Completion in the Wild

About

Multi-modal knowledge graph completion (MMKGC) aims to automatically discover unobserved factual knowledge in a given multi-modal knowledge graph by collaboratively modeling the triple structure and the multi-modal information of entities. However, real-world MMKGs are challenging because of their diverse and imbalanced nature: the modality information can span various types (e.g., image, text, numeric, audio, video), but its distribution among entities is uneven, leaving some entities with missing modalities. Existing works usually focus on common modalities such as image and text while neglecting this imbalanced distribution of modality information. To address these issues, we propose NativE, a comprehensive framework for MMKGC in the wild. NativE introduces a relation-guided dual adaptive fusion module that enables adaptive fusion of arbitrary modalities, and employs a collaborative modality adversarial training framework to augment the imbalanced modality information. We construct a new benchmark called WildKGC with five datasets to evaluate our method. Empirical comparisons with 21 recent baselines confirm the superiority of our method, which consistently achieves state-of-the-art performance across different datasets and various scenarios while remaining efficient and generalizable. Our code and data are released at https://github.com/zjukg/NATIVE
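The abstract only names the relation-guided adaptive fusion module without giving its equations, but the general idea of relation-guided fusion can be sketched as follows: score each modality's entity embedding against the query relation embedding and combine them with softmax-normalized weights. This is a minimal illustrative sketch, not the paper's actual formulation; the function name, scoring rule, and modality set are all assumptions.

```python
import numpy as np

def softmax(x):
    # numerically stable softmax over a 1-D array
    x = x - x.max()
    e = np.exp(x)
    return e / e.sum()

def relation_guided_fusion(modal_embs, rel_emb):
    """Hypothetical sketch of relation-guided modality fusion:
    weight each modality embedding by its dot-product relevance
    to the query relation, then take the weighted sum."""
    names = list(modal_embs)
    scores = np.array([modal_embs[m] @ rel_emb for m in names])
    weights = softmax(scores)
    fused = sum(w * modal_embs[m] for w, m in zip(weights, names))
    return fused, dict(zip(names, weights))

# toy entity with three modalities (structure, image, text)
rng = np.random.default_rng(0)
d = 8
modal_embs = {m: rng.normal(size=d) for m in ("structure", "image", "text")}
rel_emb = rng.normal(size=d)
fused, weights = relation_guided_fusion(modal_embs, rel_emb)
```

Because the weights depend on the relation embedding, different query relations can emphasize different modalities of the same entity, which is the property an adaptive fusion module needs when modality coverage is uneven across entities.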

Yichi Zhang, Zhuo Chen, Lingbing Guo, Yajing Xu, Binbin Hu, Ziqi Liu, Wen Zhang, Huajun Chen • 2024

Related benchmarks

Task                         Dataset                       Result       Rank
Knowledge Graph Completion   MKG-Y                         MRR 39.21    22
Knowledge Graph Completion   MKG-W                         MRR 0.3684   22
Knowledge Graph Completion   DB15K                         MRR 34.3     22
Knowledge Graph Completion   Overall DB15K, MKG-W, MKG-Y   MRR 36.78    22
