
GraphGPT-o: Synergistic Multimodal Comprehension and Generation on Graphs

About

The rapid development of Multimodal Large Language Models (MLLMs) has enabled the integration of multiple modalities, including texts and images, within the large language model (LLM) framework. However, texts and images are usually interconnected, forming a multimodal attributed graph (MMAG). How MLLMs can incorporate the relational information (i.e., graph structure) and semantic information (i.e., texts and images) on such graphs for multimodal comprehension and generation remains underexplored. In this paper, we propose GraphGPT-o, which supports omni-multimodal understanding and creation on MMAGs. We first comprehensively study linearization variants for transforming semantic and structural information into input for MLLMs. Then, we propose a hierarchical aligner that enables deep graph encoding, bridging the gap between MMAGs and MLLMs. Finally, we explore inference choices, adapting MLLMs to interleaved text and image generation in graph scenarios. Extensive experiments on three datasets from different domains demonstrate the effectiveness of our proposed method. Datasets and codes will be open-sourced upon acceptance.
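The abstract mentions studying linearization variants that turn a node's semantic and structural context into MLLM input. As a rough illustration only, the sketch below shows one plausible way to linearize a node's 1-hop neighborhood in an MMAG into an interleaved text/image prompt; the node dictionary structure, the `[CENTER]`/`[NEIGHBOR]` markers, and the `<image:...>` placeholder convention are all assumptions for illustration, not the paper's actual design.

```python
# Hypothetical sketch: linearize a node's 1-hop neighborhood in a multimodal
# attributed graph (MMAG) into an interleaved text/image prompt string.
# Markers and placeholders are illustrative assumptions, not the paper's format.

def linearize_neighborhood(graph, node_id, max_neighbors=3):
    """Emit the center node first, then up to max_neighbors 1-hop neighbors."""
    node = graph[node_id]
    parts = [f"[CENTER] {node['text']} <image:{node['image']}>"]
    for nbr_id in node["neighbors"][:max_neighbors]:
        nbr = graph[nbr_id]
        parts.append(f"[NEIGHBOR] {nbr['text']} <image:{nbr['image']}>")
    return " ".join(parts)

# Toy two-node graph (e.g., two linked product listings with images).
graph = {
    "n0": {"text": "Red running shoes", "image": "img0.jpg", "neighbors": ["n1"]},
    "n1": {"text": "Matching sports socks", "image": "img1.jpg", "neighbors": ["n0"]},
}
prompt = linearize_neighborhood(graph, "n0")
```

In a real pipeline, the `<image:...>` placeholders would be swapped for the MLLM's actual image tokens before generation.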

Yi Fang, Bowen Jin, Jiacheng Shen, Sirui Ding, Qiaoyu Tan, Jiawei Han · 2025

Related benchmarks

Task                  Dataset      Metric                       Result   Rank
Node Classification   Movies       Accuracy                     52.48    47
Node Clustering       RedditS      NMI                          79.33    31
Modal Retrieval       Ele-fashion  MRR                          88.45    31
Link Prediction       Bili Dance   MRR                          37.22    27
Node Classification   Grocery      Accuracy                     78.27    21
G2Image               SemArt       CLIP Similarity (CLIP-S)     70.47    17
G2Text                Flickr30K    BLEU-4                       9.57     17
Link Prediction       DY           MRR                          70.04    17
Node Clustering       Toys         NMI                          45.34    17
Graph-to-Image        SemArt       CLIP-S Score                 70.84    14
Showing 10 of 17 rows
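Several rows in the table report MRR (mean reciprocal rank), a standard retrieval and link-prediction metric. For reference, here is a minimal computation of MRR from the rank at which the correct item appears for each query (scores like 88.45 in the table are presumably MRR × 100):

```python
def mean_reciprocal_rank(ranks):
    """MRR: average of 1/rank of the first correct item across queries.

    `ranks` holds the 1-based rank of the correct item for each query.
    """
    return sum(1.0 / r for r in ranks) / len(ranks)

# Three queries whose correct item appeared at ranks 1, 2, and 4:
# (1 + 0.5 + 0.25) / 3 = 0.5833...
mrr = mean_reciprocal_rank([1, 2, 4])
```

Higher is better; a perfect retriever (correct item always ranked first) scores 1.0.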
