
Noise-powered Multi-modal Knowledge Graph Representation Framework

About

The rise of multi-modal pre-training highlights the need for a unified Multi-Modal Knowledge Graph (MMKG) representation learning framework. Such a framework is essential for effectively embedding structured knowledge into multi-modal Large Language Models, alleviating issues such as knowledge misconceptions and multi-modal hallucinations. In this work, we examine how accurately models embed entities in MMKGs through two pivotal tasks: Multi-modal Knowledge Graph Completion (MKGC) and Multi-modal Entity Alignment (MMEA). Building on this foundation, we propose SNAG, a novel method that uses a Transformer-based architecture with modality-level noise masking to robustly integrate multi-modal entity features in KGs. By incorporating task-specific training objectives for both MKGC and MMEA, our approach achieves state-of-the-art (SOTA) performance across ten datasets, demonstrating its versatility. Moreover, SNAG not only functions as a standalone model but also enhances other existing methods, providing stable performance improvements. Code and data are available at https://github.com/zjukg/SNAG.
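The core idea of modality-level noise masking is to occasionally replace an entity's whole modality feature (e.g., its image or text embedding) with noise during training, so the fusion model learns not to over-rely on any single modality. The sketch below is a hypothetical NumPy illustration of that idea, not the authors' implementation (which uses a Transformer-based fusion; here simple mean pooling stands in, and all function names are invented for illustration):

```python
import numpy as np

def noise_mask_modalities(feats, mask_prob=0.3, rng=None):
    """Randomly replace entire modality feature vectors with Gaussian noise.

    feats: dict mapping modality name (e.g. "image", "text") -> (d,) vector.
    mask_prob: probability of masking each modality independently.
    Hypothetical sketch of modality-level noise masking.
    """
    rng = rng or np.random.default_rng(0)
    masked = {}
    for name, vec in feats.items():
        if rng.random() < mask_prob:
            # Replace the whole modality with noise of the same shape/dtype.
            masked[name] = rng.standard_normal(vec.shape).astype(vec.dtype)
        else:
            masked[name] = vec
    return masked

def fuse(feats):
    """Fuse modality features by mean pooling (a stand-in for the
    Transformer fusion described in the paper)."""
    return np.mean(np.stack(list(feats.values())), axis=0)
```

In training, each entity's modality dictionary would pass through `noise_mask_modalities` before fusion; at inference, masking is disabled (`mask_prob=0`), so the clean features are fused directly.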

Zhuo Chen, Yin Fang, Yichi Zhang, Lingbing Guo, Jiaoyan Chen, Jeff Z. Pan, Huajun Chen, Wen Zhang • 2024

Related benchmarks

Task                         Dataset                         Result       Rank
Knowledge Graph Completion   MKG-W                           MRR 0.373    22
Knowledge Graph Completion   MKG-Y                           MRR 39.1     22
Knowledge Graph Completion   Overall (DB15K, MKG-W, MKG-Y)   MRR 37.57    22
Knowledge Graph Completion   DB15K                           MRR 36.3     22
