
All in One and One for All: A Simple yet Effective Method towards Cross-domain Graph Pretraining

About

Large Language Models (LLMs) have revolutionized the fields of computer vision (CV) and natural language processing (NLP). One of the most notable advancements of LLMs is that a single model is trained on vast and diverse datasets spanning multiple domains -- a paradigm we term "All in One". This methodology endows LLMs with strong generalization capabilities, facilitating an encompassing comprehension of varied data distributions. Leveraging these capabilities, a single LLM demonstrates remarkable versatility across a variety of domains -- a paradigm we term "One for All". However, applying this idea to the graph field remains a formidable challenge, with cross-domain pretraining often resulting in negative transfer. This issue is particularly acute in few-shot learning scenarios, where the paucity of training data necessitates the incorporation of external knowledge sources. In response to this challenge, we propose a novel approach called Graph COordinators for PrEtraining (GCOPE), which harnesses the underlying commonalities across diverse graph datasets to enhance few-shot learning. Our methodology involves a unification framework that amalgamates disparate graph datasets during the pretraining phase to distill and transfer meaningful knowledge to target tasks. Extensive experiments across multiple graph datasets demonstrate the superior efficacy of our approach. By successfully leveraging the synergistic potential of multiple graph datasets for pretraining, our work stands as a pioneering contribution to the realm of graph foundation models.
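The core idea of the unification framework -- connecting otherwise disjoint graph datasets through dedicated "coordinator" nodes so that pretraining can transfer knowledge across them -- can be sketched in plain Python. This is an illustrative sketch only, not the authors' implementation: the function name `unify_graphs` and the edge-list representation are assumptions for the example.

```python
# Illustrative sketch (assumed names, not the paper's code): merge several
# graph datasets into one pretraining graph by adding one virtual
# "coordinator" node per dataset, wired to every node in that dataset,
# and fully connecting the coordinators so information can flow across
# dataset boundaries.

def unify_graphs(graphs):
    """graphs: list of edge lists, each using local node ids 0..n_i-1.
    Returns (edges, coordinators) over a single global id space, with
    coordinator nodes appended after all real nodes."""
    edges, coordinators, offset = [], [], 0
    # number of nodes in each dataset (handles isolated-node-free edge lists)
    sizes = [1 + max((max(u, v) for u, v in g), default=-1) for g in graphs]
    for g, n in zip(graphs, sizes):
        # relabel this dataset's edges into the global id space
        edges.extend((u + offset, v + offset) for u, v in g)
        offset += n
    for i, n in enumerate(sizes):
        c = offset + i          # global id of this dataset's coordinator
        coordinators.append(c)
        start = sum(sizes[:i])  # where this dataset's nodes begin globally
        # coordinator connects to every node of its own dataset
        edges.extend((c, start + v) for v in range(n))
    # fully connect coordinators to bridge the datasets
    for i, a in enumerate(coordinators):
        for b in coordinators[i + 1:]:
            edges.append((a, b))
    return edges, coordinators
```

After unification, a standard graph self-supervised objective (e.g. contrastive pretraining) can be run on the single merged graph, and the coordinator nodes are discarded when transferring to a downstream target dataset.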

Haihong Zhao, Aochuan Chen, Xiangguo Sun, Hong Cheng, Jia Li• 2024

Related benchmarks

Task                  Dataset            Metric          Result   Rank
Node Classification   Cora               Accuracy        62.5     1215
Graph Classification  PROTEINS           Accuracy        73.76    994
Node Classification   Cora (test)        Mean Accuracy   58.7     861
Node Classification   Wisconsin          Accuracy        43.74    627
Node Classification   Texas              Accuracy        41.33    616
Node Classification   ogbn-arxiv (test)  Accuracy        52.8     433
Node Classification   Pubmed             Accuracy        55.2     396
Node Classification   Citeseer           Accuracy        56.9     393
Node Classification   wikiCS             Accuracy        53.5     317
Node Classification   arXiv              Accuracy        39.45    219

(Showing 10 of 75 rows.)
