
GRAPHGPT-O: Synergistic Multimodal Comprehension and Generation on Graphs

About

The rapid development of Multimodal Large Language Models (MLLMs) has enabled the integration of multiple modalities, including texts and images, within the large language model (LLM) framework. However, texts and images are usually interconnected, forming a multimodal attributed graph (MMAG). It is underexplored how MLLMs can incorporate the relational information (i.e., graph structure) and semantic information (i.e., texts and images) on such graphs for multimodal comprehension and generation. In this paper, we propose GraphGPT-o, which supports omni-multimodal understanding and creation on MMAGs. We first comprehensively study linearization variants to transform semantic and structural information as input for MLLMs. Then, we propose a hierarchical aligner that enables deep graph encoding, bridging the gap between MMAGs and MLLMs. Finally, we explore the inference choices, adapting MLLM to interleaved text and image generation in graph scenarios. Extensive experiments on three datasets from different domains demonstrate the effectiveness of our proposed method. Datasets and codes will be open-sourced upon acceptance.
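To make the linearization idea concrete, here is a minimal sketch of turning an MMAG neighborhood into an interleaved text/image sequence for an MLLM. The graph schema, the `<image>` sentinel, the `[SEP]` separator, and the BFS traversal are illustrative assumptions, not the paper's actual tokenization.

```python
from collections import deque

def linearize_mmag(graph, start, max_nodes=4):
    """Linearize a multimodal attributed graph (MMAG) neighborhood via BFS.

    graph: {node_id: {"text": str, "image": bool, "neighbors": [node_ids]}}
    Returns an interleaved sequence where each node contributes its text
    attribute, plus an <image> placeholder (later replaced by vision
    features) when the node carries an image.
    """
    visited, order = {start}, []
    queue = deque([start])
    # Breadth-first traversal bounds the context to nearby nodes.
    while queue and len(order) < max_nodes:
        node = queue.popleft()
        order.append(node)
        for nbr in graph[node]["neighbors"]:
            if nbr not in visited:
                visited.add(nbr)
                queue.append(nbr)
    parts = []
    for node in order:
        attrs = graph[node]
        segment = attrs["text"]
        if attrs["image"]:
            segment += " <image>"
        parts.append(segment)
    return " [SEP] ".join(parts)

# Toy e-commerce-style MMAG: a product node with two neighbors.
graph = {
    "a": {"text": "red dress", "image": True, "neighbors": ["b", "c"]},
    "b": {"text": "matching shoes", "image": True, "neighbors": ["a"]},
    "c": {"text": "style guide", "image": False, "neighbors": ["a"]},
}
print(linearize_mmag(graph, "a"))
```

Different linearization variants would change the traversal order (e.g., DFS or neighbor sampling) or the separator scheme; the paper studies such variants systematically.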

Yi Fang, Bowen Jin, Jiacheng Shen, Sirui Ding, Qiaoyu Tan, Jiawei Han • 2025

Related benchmarks

Task                   Dataset               Metric         Result   Rank
Graph-to-Image         SemArt                CLIP-S Score   70.84    14
Graph-to-Text          Flickr30K             BLEU-4         9.89     14
Link Prediction        Cloth                 MRR            51.43    14
Modal Retrieval        Ele-fashion           MRR            88.45    14
Node Classification    Movies                Accuracy       49.12    14
Node Classification    Goodreads             Accuracy       64.25    14
Node Clustering        Grocery               NMI            50.64    14
Node Clustering        RedditS               NMI            79.33    14
Multimodal Generation  ART500K (test)        CLIP-I2 Score  77.62    6
Multimodal Generation  Amazon Beauty (test)  CLIP-I2 Score  63.46    6

Showing 10 of 11 rows.
