
Graph4MM: Weaving Multimodal Learning with Structural Information

About

Real-world multimodal data usually exhibit complex structural relationships beyond traditional one-to-one mappings like image-caption pairs. Entities across modalities interact in intricate ways, with images and text forming diverse interconnections through contextual dependencies and co-references. Graphs provide powerful structural information for modeling intra-modal and inter-modal relationships. However, previous works fail to distinguish multi-hop neighbors and treat the graph as a standalone modality, which fragments the overall understanding. This limitation presents two key challenges in multimodal learning: (1) integrating structural information from multi-hop neighbors into foundation models, and (2) fusing modality-specific information in a principled manner. To address these challenges, we revisit the role of graphs in multimodal learning within the era of foundation models and propose Graph4MM, a graph-based multimodal learning framework. Specifically, we introduce Hop-Diffused Attention, which integrates multi-hop structural information into self-attention through causal masking and hop diffusion. Furthermore, we design MM-QFormer, a multi-mapping querying transformer for cross-modal fusion. Through theoretical and empirical analysis, we show that leveraging structures to integrate both intra- and inter-modal interactions improves multimodal understanding beyond treating graphs as a standalone modality. Experiments on both generative and discriminative tasks show that Graph4MM outperforms larger VLMs, LLMs, and multimodal graph baselines, achieving a 6.93% average improvement.
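The abstract's core idea of constraining self-attention by multi-hop graph structure can be illustrated with a toy sketch. This is not the paper's Hop-Diffused Attention (which additionally uses causal masking and hop diffusion); it is a minimal, assumed formulation in which node i may only attend to nodes within a fixed hop radius. The function names `hop_distances` and `hop_masked_attention` are hypothetical.

```python
import numpy as np

def hop_distances(adj):
    """All-pairs hop distances on an unweighted graph via BFS.

    adj: (n, n) 0/1 adjacency matrix. Unreachable pairs get np.inf.
    """
    n = adj.shape[0]
    dist = np.full((n, n), np.inf)
    for s in range(n):
        dist[s, s] = 0
        frontier, d = [s], 0
        while frontier:
            d += 1
            nxt = []
            for u in frontier:
                for v in np.nonzero(adj[u])[0]:
                    if dist[s, v] == np.inf:
                        dist[s, v] = d
                        nxt.append(v)
            frontier = nxt
    return dist

def hop_masked_attention(x, adj, max_hops=2):
    """Toy self-attention restricted to a max_hops neighborhood.

    For brevity Q = K = V = x; real models would use learned projections.
    Scores outside the hop radius are set to -inf before the softmax,
    so those nodes receive zero attention weight.
    """
    d = x.shape[1]
    scores = (x @ x.T) / np.sqrt(d)
    dist = hop_distances(adj)
    scores = np.where(dist <= max_hops, scores, -np.inf)
    w = np.exp(scores - scores.max(axis=1, keepdims=True))
    w /= w.sum(axis=1, keepdims=True)
    return w @ x
```

With `max_hops=1` on a path graph 0-1-2-3, node 0's output is unaffected by node 3's features, since the mask zeroes that attention weight; raising `max_hops` widens the receptive field, which is the structural knob the abstract's multi-hop integration turns.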

Xuying Ning, Dongqi Fu, Tianxin Wei, Wujiang Xu, Jingrui He• 2025

Related benchmarks

| Task                | Dataset     | Result                              | Rank |
|---------------------|-------------|-------------------------------------|------|
| Node Classification | Movies      | Accuracy 55.48                      | 47   |
| Node Clustering     | RedditS     | NMI 84.14                           | 31   |
| Modal Retrieval     | Ele-fashion | MRR 86.25                           | 31   |
| Link Prediction     | Bili Dance  | MRR 38.48                           | 27   |
| Node Classification | Grocery     | Accuracy 83.57                      | 21   |
| G2Text              | Flickr30K   | BLEU-4 10.15                        | 17   |
| Link Prediction     | DY          | MRR 74.31                           | 17   |
| G2Image             | SemArt      | CLIP Similarity (CLIP-S) 66.85      | 17   |
| Node Clustering     | Toys        | NMI 46.74                           | 17   |
| Node Classification | Toys        | Accuracy 78.91                      | 14   |

Showing 10 of 17 rows.
