Distilling Self-Knowledge From Contrastive Links to Classify Graph Nodes Without Passing Messages

About

Graph Neural Networks (GNNs) following the Message Passing paradigm have become the dominant way to learn on graph data. Models in this paradigm must spend extra space to look up adjacent nodes via adjacency matrices and extra time to aggregate messages from those neighbours. To address this issue, we develop a method called LinkDist that distils self-knowledge from connected node pairs into a Multi-Layer Perceptron (MLP) without the need to aggregate messages. Experiments on 8 real-world datasets show that the MLP derived by LinkDist can predict a node's label without knowing its adjacencies, yet achieves accuracy comparable to GNNs in both semi- and full-supervised node classification. Moreover, thanks to its Non-Message Passing paradigm, LinkDist can also distil self-knowledge from arbitrarily sampled node pairs in a contrastive way, further boosting its performance.
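To make the idea concrete, below is a minimal training-step sketch of how link-based self-distillation into an MLP could look in PyTorch. This is our own hedged reading of the abstract, not the authors' released code: all names (`LinkDistMLP`, `train_step`), the loss terms, and their weighting are hypothetical, and the exact objectives in the paper may differ.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class LinkDistMLP(nn.Module):
    """Plain MLP classifier: no adjacency lookup, no message aggregation."""
    def __init__(self, in_dim: int, hid_dim: int, n_classes: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hid_dim),
            nn.ReLU(),
            nn.Linear(hid_dim, n_classes),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)  # x holds node features only

def train_step(model, opt, feats, labels, labeled_mask, src, dst, n_neg=256):
    """One step over a batch of linked pairs (src[i], dst[i]) plus random pairs.

    feats: [N, F] node features; labels: [N] class ids;
    labeled_mask: [N] bool, True where a node's label is available.
    """
    logits_src = model(feats[src])
    logits_dst = model(feats[dst])
    loss = logits_src.new_zeros(())

    # (1) Cross-link supervision: a node's own features are trained to
    # predict the label observed at its linked neighbour, so label
    # knowledge travels over edges at training time only.
    m = labeled_mask[dst]
    if m.any():
        loss = loss + F.cross_entropy(logits_src[m], labels[dst][m])
    m = labeled_mask[src]
    if m.any():
        loss = loss + F.cross_entropy(logits_dst[m], labels[src][m])

    # (2) Self-distillation across a link: pull the two endpoint
    # predictions together using soft targets (KL to a detached copy).
    loss = loss + F.kl_div(
        F.log_softmax(logits_src, dim=-1),
        F.softmax(logits_dst.detach(), dim=-1),
        reduction="batchmean",
    )

    # (3) Contrastive negatives: for arbitrarily sampled node pairs
    # (overwhelmingly unlinked in a sparse graph), penalise agreement
    # instead, so only actually linked pairs are pulled together.
    n = feats.size(0)
    p = F.softmax(model(feats[torch.randint(0, n, (n_neg,))]), dim=-1)
    q = F.softmax(model(feats[torch.randint(0, n, (n_neg,))]), dim=-1)
    loss = loss + (p * q.detach()).sum(-1).mean()  # bounded agreement penalty

    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

At inference time the trained MLP classifies a node from its feature vector alone, so neither the adjacency matrix nor any neighbour aggregation is needed.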

Yi Luo, Aiguo Chen, Ke Yan, Ling Tian • 2021

Related benchmarks

| Task | Dataset | Accuracy | Rank |
|---|---|---|---|
| Transductive Node Classification | Pubmed (transductive) | 89.58 | 95 |
| Node Classification | Cora Full | 70.32 | 88 |
| Node Classification | Cora (60/20/20 random split) | 70.32 | 74 |
| Node Classification | Pubmed (60/20/20 random split) | 89.58 | 31 |
| Node Classification | Coauthor CS (semi-supervised inductive) | 92.2 | 23 |
| Node Classification | CITESEER inductive setting (test) | 71.74 | 21 |
| Node Classification | Coauthor CS (semi-supervised transductive) | 91.88 | 19 |
| Node Classification | Cora transductive (full-supervised) | 88.24 | 14 |
| Node Classification | CiteSeer full-supervised transductive | 75.79 | 7 |
| Node Classification | Amazon Photo full-supervised transductive | 94.36 | 7 |

Showing 10 of 30 rows.

Other info

Code
