A Cross-graph Tuning-free GNN Prompting Framework
About
GNN prompting aims to adapt models across tasks and graphs without extensive retraining. However, most existing graph prompt methods still require task-specific parameter updates and struggle to generalize across graphs, which limits their performance and undermines the core promise of prompting. In this work, we introduce the Cross-graph Tuning-free Prompting framework (CTP), which supports both homogeneous and heterogeneous graphs and can be deployed directly on unseen graphs without further parameter tuning, enabling a plug-and-play GNN inference engine. Extensive experiments on few-shot prediction tasks show that CTP outperforms state-of-the-art methods by an average accuracy margin of 30.8% and by up to 54%, confirming its effectiveness and offering a new perspective on graph prompt learning.
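The paper itself does not publish its inference procedure here, but the "tuning-free" idea can be illustrated with a standard baseline: few-shot classification over frozen GNN embeddings using class prototypes, where no parameters are updated at inference time. The sketch below is an assumption for illustration only; the function name `fewshot_predict` and the prototype-based classifier are not from the CTP paper.

```python
import numpy as np

def fewshot_predict(support_emb, support_labels, query_emb):
    """Tuning-free few-shot classification sketch: assign each query node
    to the class whose prototype (mean support embedding) is most similar.
    No gradient updates are performed, mirroring the plug-and-play setting."""
    def normalise(x):
        # L2-normalise rows so dot products become cosine similarities.
        return x / np.linalg.norm(x, axis=1, keepdims=True)

    support_emb = normalise(np.asarray(support_emb, dtype=float))
    query_emb = normalise(np.asarray(query_emb, dtype=float))
    support_labels = np.asarray(support_labels)

    classes = np.unique(support_labels)
    # One prototype per class: mean of that class's support embeddings.
    prototypes = normalise(np.stack([
        support_emb[support_labels == c].mean(axis=0) for c in classes
    ]))

    # Nearest-prototype assignment for each query node.
    scores = query_emb @ prototypes.T
    return classes[scores.argmax(axis=1)]
```

In a cross-graph setting, `support_emb` and `query_emb` would come from a frozen pretrained GNN applied to the target graph; because the classifier is built purely from the few labelled support nodes, no per-graph fine-tuning is needed.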
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Link Prediction | FB15k-237 (test) | -- | 419 |
| Node Classification | arXiv | Accuracy: 66.09 | 219 |
| Link Prediction | NELL (test) | Accuracy: 84.32 | 40 |
| Link Prediction | ConceptNet (test) | Accuracy: 46.3 | 10 |