
Can Graph Neural Networks Learn Language with Extremely Weak Text Supervision?

About

While great success has been achieved in building vision models with Contrastive Language-Image Pre-training (CLIP) over internet-scale image-text pairs, building transferable Graph Neural Networks (GNNs) with the CLIP pipeline is challenging because of the scarcity of labeled data and text supervision, the varying levels of downstream tasks, and the conceptual gaps between domains. To address these issues, we propose a multi-modal prompt learning paradigm that effectively adapts pre-trained GNNs to downstream tasks and data, given only a few semantically labeled samples, each with extremely weak text supervision. Our new paradigm embeds graphs directly in the same space as Large Language Models (LLMs) by learning both graph prompts and text prompts simultaneously. We demonstrate the superior performance of our paradigm in few-shot, multi-task-level, and cross-domain settings. Moreover, we build the first CLIP-style zero-shot classification prototype that can generalize GNNs to unseen classes with extremely weak text supervision. The code is available at https://github.com/Violet24K/Morpher.
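To make the idea concrete, below is a minimal sketch of CLIP-style zero-shot graph classification with a learnable graph prompt in a shared graph-text embedding space. Everything in it is a hypothetical stand-in: ToyGNNEncoder, PromptedGraphCLIP, the dimensions, and the random placeholder text embeddings are illustrative assumptions, not the Morpher implementation (see the linked repository for the actual code).

```python
# Hypothetical sketch of CLIP-style zero-shot graph classification with a
# learnable graph prompt. Not the Morpher code; all names and shapes are
# illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ToyGNNEncoder(nn.Module):
    """One round of mean-aggregation message passing + mean pooling (toy stand-in)."""
    def __init__(self, in_dim: int, embed_dim: int):
        super().__init__()
        self.lin = nn.Linear(in_dim, embed_dim)

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        # x: (num_nodes, in_dim); adj: (num_nodes, num_nodes) dense adjacency
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1.0)
        h = self.lin(adj @ x / deg)   # aggregate neighbor features, then project
        return F.relu(h).mean(dim=0)  # mean-pool nodes into one graph embedding


class PromptedGraphCLIP(nn.Module):
    """Graph prompt + class text embeddings compared in one shared space."""
    def __init__(self, in_dim: int, embed_dim: int, num_classes: int):
        super().__init__()
        self.gnn = ToyGNNEncoder(in_dim, embed_dim)
        # Learnable graph prompt added to node features (prompt-tuning analogue).
        self.graph_prompt = nn.Parameter(torch.zeros(in_dim))
        # Placeholder for LLM embeddings of class descriptions; in the paper these
        # would come from a language model, here they are random learnable vectors.
        self.text_embeds = nn.Parameter(torch.randn(num_classes, embed_dim))

    def forward(self, x: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
        g = self.gnn(x + self.graph_prompt, adj)  # prompted graph embedding
        g = F.normalize(g, dim=-1)
        t = F.normalize(self.text_embeds, dim=-1)
        return g @ t.t()  # cosine-similarity logits over class descriptions


# Usage: classify one toy graph against 3 class descriptions, zero-shot style.
model = PromptedGraphCLIP(in_dim=8, embed_dim=16, num_classes=3)
x = torch.randn(5, 8)                    # 5 nodes with 8-dim features
adj = (torch.rand(5, 5) > 0.5).float()   # random dense adjacency
logits = model(x, adj)
print("predicted class:", logits.argmax().item())
```

In the paradigm described above, the text embeddings would come from an LLM encoding class descriptions, and the graph prompts and text prompts would be tuned jointly; this sketch only illustrates the shared-embedding-space inference step.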

Zihao Li, Lecheng Zheng, Bowen Jin, Dongqi Fu, Baoyu Jing, Yikun Ban, Jingrui He, Jiawei Han • 2024

Related benchmarks

Task | Dataset | Metric | Result | Rank
Graph Classification | PROTEINS | Accuracy | 73.53 | 742
Graph Classification | MUTAG | Accuracy | 79.33 | 697
Graph Classification | Mutag (test) | Accuracy | 76.67 | 217
Molecular Property Classification | MoleculeNet BACE | ROC-AUC | 68.58 | 36
Graph-level classification | MUTAG (target) | Accuracy | 76.67 | 10
Node-level classification | PubMed (target) | Accuracy | 58.29 | 10
Graph Classification | MoleculeNet Tox21 | ROC-AUC | 0.7459 | 8
Graph Classification | MoleculeNet HIV | ROC-AUC | 72.83 | 8
Graph Classification | MSRC 21C | Accuracy | 50.85 | 7
Edge classification | Cora | Accuracy | 55.71 | 5

Showing 10 of 13 rows.
