
Multi-Domain Graph Foundation Models: Robust Knowledge Transfer via Topology Alignment

About

Recent advances in computer vision (CV) and natural language processing (NLP) have inspired researchers to develop general-purpose graph foundation models through pre-training across diverse domains. However, a fundamental challenge arises from the substantial differences in graph topologies across domains. Additionally, real-world graphs are often sparse and prone to noisy connections and adversarial attacks. To address these issues, we propose the Multi-Domain Graph Foundation Model (MDGFM), a unified framework that aligns and leverages cross-domain topological information to facilitate robust knowledge transfer. MDGFM bridges different domains by adaptively balancing features and topology while refining original graphs to eliminate noise and align topological structures. To further enhance knowledge transfer, we introduce an efficient prompt-tuning approach. By aligning topologies, MDGFM not only improves multi-domain pre-training but also enables robust knowledge transfer to unseen domains. Theoretical analyses provide guarantees of MDGFM's effectiveness and domain generalization capabilities. Extensive experiments on both homophilic and heterophilic graph datasets validate the robustness and efficacy of our method.
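The abstract's core idea of "adaptively balancing features and topology while refining original graphs" can be illustrated with a minimal sketch: blend the observed adjacency matrix with a kNN graph built from node-feature similarity, then prune weak edges as likely noise. The function name `refine_topology` and the parameters `alpha` (topology/feature trade-off), `k`, and the pruning threshold are illustrative assumptions, not MDGFM's actual implementation:

```python
import numpy as np

def refine_topology(adj, feats, alpha=0.5, k=1):
    """Hypothetical sketch of topology refinement: blend the observed
    adjacency with a feature-derived kNN graph, then drop weak edges.
    `alpha` balances original topology against feature similarity."""
    # Cosine similarity between row-normalized node features
    normed = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = normed @ normed.T
    np.fill_diagonal(sim, 0.0)
    # Build a kNN graph from feature similarity
    knn = np.zeros_like(sim)
    for i in range(sim.shape[0]):
        nearest = np.argsort(sim[i])[-k:]
        knn[i, nearest] = 1.0
    knn = np.maximum(knn, knn.T)  # symmetrize
    # Adaptive blend of observed topology and feature graph
    refined = alpha * adj + (1.0 - alpha) * knn
    # Prune weak edges as likely noise (threshold is an assumption)
    refined[refined < 0.5] = 0.0
    return refined

# Toy example: nodes 0 and 1 have similar features, node 2 does not
adj = np.array([[0., 1., 0.],
                [1., 0., 0.],
                [0., 0., 0.]])
feats = np.array([[1.0, 0.0],
                  [1.0, 0.1],
                  [0.0, 1.0]])
refined = refine_topology(adj, feats)
```

Here the observed edge (0, 1) is reinforced because it agrees with the feature graph, while edges supported by only one of the two views receive lower weight, which is the intuition behind balancing the two information sources.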

Shuo Wang, Bokui Wang, Zhixiang Shen, Boyan Deng, Zhao Kang • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Node Classification | Cora | Accuracy | 65.1 | 1215 |
| Graph Classification | PROTEINS | Accuracy | 65.95 | 994 |
| Node Classification | Cora (test) | Mean Accuracy | 65.8 | 861 |
| Node Classification | Wisconsin | Accuracy | 47.46 | 627 |
| Node Classification | Texas | Accuracy | 48.33 | 616 |
| Node Classification | ogbn-arxiv (test) | Accuracy | 59.2 | 433 |
| Node Classification | Pubmed | Accuracy | 58.4 | 396 |
| Node Classification | Citeseer | Accuracy | 55.8 | 393 |
| Node Classification | wikiCS | Accuracy | 60.5 | 317 |
| Node Classification | arXiv | Accuracy | 32.28 | 219 |

Showing 10 of 61 rows.
