
Multi-Domain Graph Foundation Models: Robust Knowledge Transfer via Topology Alignment

About

Recent advances in CV and NLP have inspired researchers to develop general-purpose graph foundation models through pre-training across diverse domains. However, a fundamental challenge arises from the substantial differences in graph topologies across domains. Additionally, real-world graphs are often sparse and prone to noisy connections and adversarial attacks. To address these issues, we propose the Multi-Domain Graph Foundation Model (MDGFM), a unified framework that aligns and leverages cross-domain topological information to facilitate robust knowledge transfer. MDGFM bridges different domains by adaptively balancing features and topology while refining original graphs to eliminate noise and align topological structures. To further enhance knowledge transfer, we introduce an efficient prompt-tuning approach. By aligning topologies, MDGFM not only improves multi-domain pre-training but also enables robust knowledge transfer to unseen domains. Theoretical analyses provide guarantees of MDGFM's effectiveness and domain generalization capabilities. Extensive experiments on both homophilic and heterophilic graph datasets validate the robustness and efficacy of our method.
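To illustrate the kind of topology refinement the abstract describes (balancing observed edges against feature similarity to suppress noisy connections), here is a minimal sketch. The function name, the blend weight `alpha`, and the top-k sparsification are illustrative assumptions, not the paper's actual formulation:

```python
import numpy as np

def refine_graph(A, X, alpha=0.5, k=2):
    """Blend the observed (possibly noisy) adjacency A with a kNN graph
    built from node-feature similarity X. alpha balances topology vs.
    features; both the blending and the kNN step are assumptions for
    illustration, not MDGFM's exact refinement procedure."""
    # Cosine similarity between node feature vectors
    Xn = X / (np.linalg.norm(X, axis=1, keepdims=True) + 1e-12)
    S = Xn @ Xn.T
    np.fill_diagonal(S, 0.0)

    # Sparsify: keep only the top-k feature neighbours per node
    S_knn = np.zeros_like(S)
    top = np.argsort(-S, axis=1)[:, :k]
    for i, nbrs in enumerate(top):
        S_knn[i, nbrs] = S[i, nbrs]

    # Symmetrize the feature graph, then blend with the original topology
    S_knn = np.maximum(S_knn, S_knn.T)
    return alpha * A + (1 - alpha) * S_knn
```

A higher `alpha` trusts the given topology more; a lower one leans on features, which is useful when edges are sparse or adversarially perturbed.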

Shuo Wang, Bokui Wang, Zhixiang Shen, Boyan Deng, Zhao Kang • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Node Classification | Cora | Accuracy | 44.83 | 885 |
| Node Classification | Pubmed | Accuracy | 58.4 | 307 |
| Node Classification | Citeseer | Accuracy | 55.8 | 275 |
| Node Classification | WikiCS | Accuracy | 53.9 | 198 |
| Node Classification | OGBN-Products | Accuracy | 54.7 | 62 |
| Node Classification | Cora | Accuracy | 66 | 38 |
| Graph Classification | Pubmed | Accuracy | 67.1 | 31 |
| Graph Classification | Citeseer | Accuracy | 60.8 | 29 |
| Supervised Graph Classification | Cora | Accuracy | 69.4 | 26 |
| Graph Classification | OGBN-Products | Accuracy | 59.8 | 26 |

Showing 10 of 15 rows.
