
MLaGA: Multimodal Large Language and Graph Assistant

About

Large Language Models (LLMs) have demonstrated substantial efficacy in advancing graph-structured data analysis. Prevailing LLM-based graph methods excel in adapting LLMs to text-rich graphs, wherein node attributes are text descriptions. However, their applications to multimodal graphs--where nodes are associated with diverse attribute types, such as texts and images--remain underexplored, despite their ubiquity in real-world scenarios. To bridge the gap, we introduce the Multimodal Large Language and Graph Assistant (MLaGA), an innovative model that adeptly extends LLM capabilities to facilitate reasoning over complex graph structures and multimodal attributes. We first design a structure-aware multimodal encoder to align textual and visual attributes within a unified space through a joint graph pre-training objective. Subsequently, we implement a multimodal instruction-tuning approach to seamlessly integrate multimodal features and graph structures into the LLM through lightweight projectors. Extensive experiments across multiple datasets demonstrate the effectiveness of MLaGA compared to leading baseline methods, achieving superior performance in diverse graph learning tasks under both supervised and transfer learning scenarios.
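The abstract describes integrating pre-aligned multimodal node features into the LLM via lightweight projectors. Below is a minimal, hypothetical sketch of that idea: per-modality MLP projectors that map encoder embeddings into the LLM's token-embedding space so each node can be spliced into the prompt as soft tokens. The dimensions, the two-layer MLP design, and the one-token-per-modality layout are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

class MultimodalProjector(nn.Module):
    """Sketch of lightweight projectors (assumed design, not the paper's exact
    architecture): map pre-aligned text/image node embeddings into the LLM's
    embedding space so graph nodes become soft tokens in the prompt."""

    def __init__(self, modal_dim: int = 512, llm_dim: int = 4096):
        super().__init__()
        # One projector per modality; both target the same LLM embedding space.
        self.text_proj = nn.Sequential(
            nn.Linear(modal_dim, llm_dim), nn.GELU(), nn.Linear(llm_dim, llm_dim)
        )
        self.image_proj = nn.Sequential(
            nn.Linear(modal_dim, llm_dim), nn.GELU(), nn.Linear(llm_dim, llm_dim)
        )

    def forward(self, text_emb: torch.Tensor, image_emb: torch.Tensor) -> torch.Tensor:
        # Concatenate projected modalities along the sequence axis:
        # each node contributes one text token and one image token.
        return torch.cat([self.text_proj(text_emb), self.image_proj(image_emb)], dim=1)

proj = MultimodalProjector()
text = torch.randn(4, 1, 512)   # 4 nodes, one text embedding each
image = torch.randn(4, 1, 512)  # 4 nodes, one image embedding each
soft_tokens = proj(text, image)
print(tuple(soft_tokens.shape))
```

During instruction tuning, only these projectors (plus optional adapters) would be trained while the LLM stays frozen, which is what makes the integration "lightweight."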

Dongzhe Fan, Yi Fang, Jiajin Liu, Djellel Difallah, Qiaoyu Tan • 2025

Related benchmarks

| Task                | Dataset     | Metric                  | Result | Rank |
|---------------------|-------------|-------------------------|--------|------|
| Node Classification | REDDIT      | Accuracy                | 91.45  | 192  |
| Node Classification | Movies      | Accuracy                | 56.25  | 47   |
| Node Clustering     | RedditS     | NMI                     | 82.58  | 31   |
| Modal Retrieval     | Ele-fashion | MRR                     | 87.65  | 31   |
| Link Prediction     | Bili Dance  | MRR                     | 39.14  | 27   |
| Node Classification | Grocery     | Accuracy                | 81.52  | 21   |
| G2Image             | SemArt      | CLIP Similarity (CLIP-S)| 68.23  | 17   |
| G2Text              | Flickr30K   | BLEU-4                  | 9.26   | 17   |
| Node Clustering     | Toys        | NMI                     | 49.2   | 17   |
| Link Prediction     | DY          | MRR                     | 72.11  | 17   |

Showing 10 of 27 rows.
