
MLaGA: Multimodal Large Language and Graph Assistant

About

Large Language Models (LLMs) have demonstrated substantial efficacy in advancing graph-structured data analysis. Prevailing LLM-based graph methods excel at adapting LLMs to text-rich graphs, wherein node attributes are text descriptions. However, their application to multimodal graphs, where nodes carry diverse attribute types such as text and images, remains underexplored, despite the ubiquity of such graphs in real-world scenarios. To bridge this gap, we introduce the Multimodal Large Language and Graph Assistant (MLaGA), an innovative model that extends LLM capabilities to reasoning over complex graph structures and multimodal attributes. We first design a structure-aware multimodal encoder that aligns textual and visual attributes within a unified space through a joint graph pre-training objective. We then apply a multimodal instruction-tuning approach that integrates multimodal features and graph structures into the LLM through lightweight projectors. Extensive experiments across multiple datasets demonstrate the effectiveness of MLaGA over leading baselines, achieving superior performance on diverse graph learning tasks in both supervised and transfer learning settings.
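The abstract describes a two-stage design: a structure-aware encoder that fuses each node's textual and visual features with its graph neighborhood, followed by a lightweight projector that maps the fused embeddings into the LLM's token-embedding space for instruction tuning. The PyTorch sketch below illustrates only that general shape of pipeline; the class names, dimensions, and single mean-aggregation hop are illustrative assumptions, not MLaGA's actual implementation.

```python
import torch
import torch.nn as nn

class StructureAwareEncoder(nn.Module):
    """Hypothetical sketch: projects per-node text and image features
    into a shared space, then mixes each node with its neighbors
    via one mean-aggregation hop over the adjacency matrix."""
    def __init__(self, text_dim=768, image_dim=512, hidden_dim=256):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, hidden_dim)
        self.image_proj = nn.Linear(image_dim, hidden_dim)

    def forward(self, text_feats, image_feats, adj):
        # Align both modalities in one space, then aggregate neighbors.
        h = self.text_proj(text_feats) + self.image_proj(image_feats)
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1)
        return (adj @ h) / deg

class MultimodalProjector(nn.Module):
    """Lightweight projector mapping fused node embeddings into the
    LLM's token-embedding dimension, so graph tokens can be spliced
    into an instruction prompt."""
    def __init__(self, hidden_dim=256, llm_dim=4096):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(hidden_dim, llm_dim),
            nn.GELU(),
            nn.Linear(llm_dim, llm_dim),
        )

    def forward(self, node_embs):
        return self.mlp(node_embs)

# Toy usage: 5 nodes with random text/image features and adjacency.
text = torch.randn(5, 768)
image = torch.randn(5, 512)
adj = (torch.rand(5, 5) > 0.5).float()
graph_tokens = MultimodalProjector()(StructureAwareEncoder()(text, image, adj))
print(graph_tokens.shape)  # torch.Size([5, 4096])
```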

Dongzhe Fan, Yi Fang, Jiajin Liu, Djellel Difallah, Qiaoyu Tan • 2025

Related benchmarks

Task | Dataset | Metric | Result | Rank
---- | ------- | ------ | ------ | ----
Graph-to-Image | SemArt | CLIP-S Score | 68.52 | 14
Graph-to-Text | Flickr30K | BLEU-4 | 9.54 | 14
Node Clustering | Grocery | NMI | 51.92 | 14
Node Classification | Goodreads | Accuracy | 66.42 | 14
Modal Retrieval | Ele-fashion | MRR | 87.65 | 14
Node Clustering | RedditS | NMI | 82.58 | 14
Link Prediction | Cloth | MRR | 49.12 | 14
Node Classification | Movies | Accuracy | 48.37 | 14