
Graph Barlow Twins: A self-supervised representation learning framework for graphs

About

Self-supervised learning (SSL) is an important research area that aims to eliminate the need for expensive data labeling. Despite the great success of SSL methods in computer vision and natural language processing, most of them employ contrastive learning objectives that require negative samples, which are hard to define. This becomes even more challenging for graphs and is a bottleneck for achieving robust representations. To overcome these limitations, we propose a framework for self-supervised graph representation learning, Graph Barlow Twins, which utilizes a cross-correlation-based loss function instead of negative samples. Moreover, it does not rely on non-symmetric neural network architectures, in contrast to BGRL, the state-of-the-art self-supervised graph representation learning method. We show that our method achieves results as competitive as the best self-supervised and fully supervised methods, while requiring fewer hyperparameters and substantially shorter computation time (ca. 30 times faster than BGRL).
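The cross-correlation objective mentioned in the abstract can be sketched as follows. This is an illustrative NumPy implementation of a Barlow Twins-style loss applied to embeddings of two augmented views; the function name, the `lambda_` value, and the epsilon are assumptions for the sketch, not details taken from the paper.

```python
import numpy as np

def barlow_twins_loss(z_a, z_b, lambda_=5e-3):
    """Cross-correlation (Barlow Twins-style) loss between embeddings
    of two views. Minimal sketch; lambda_ is an illustrative default.

    z_a, z_b : arrays of shape (N, D) -- N samples (e.g. nodes), D dims.
    """
    n, d = z_a.shape
    # Standardize each embedding dimension across the batch.
    z_a = (z_a - z_a.mean(0)) / (z_a.std(0) + 1e-8)
    z_b = (z_b - z_b.mean(0)) / (z_b.std(0) + 1e-8)
    # Empirical cross-correlation matrix, shape (D, D).
    c = z_a.T @ z_b / n
    # Invariance term: pull diagonal entries toward 1.
    on_diag = ((1.0 - np.diag(c)) ** 2).sum()
    # Redundancy-reduction term: push off-diagonal entries toward 0.
    off_diag = (c ** 2).sum() - (np.diag(c) ** 2).sum()
    return on_diag + lambda_ * off_diag
```

Because the objective compares correlation structure between the two views rather than individual pairs of samples, no negative samples are needed, which is the property the abstract highlights.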

Piotr Bielak, Tomasz Kajdanowicz, Nitesh V. Chawla • 2021

Related benchmarks

Task | Dataset | Metric | Result | Rank
Node Classification | Cora | Accuracy | 81 | 1215
Node Classification | Citeseer | Accuracy | 70.8 | 931
Node Classification | Cora (test) | Mean Accuracy | 84.89 | 861
Node Classification | Citeseer (test) | Accuracy | 0.7659 | 824
Node Classification | Pubmed | Accuracy | 79 | 819
Node Classification | PubMed (test) | Accuracy | 86.1 | 546
Node Classification | ogbn-arxiv (test) | Accuracy | 70.1 | 433
Node Classification | Chameleon (test) | Mean Accuracy | 68.77 | 297
Node Classification | Cornell (test) | Mean Accuracy | 59.18 | 274
Node Classification | Texas (test) | Mean Accuracy | 72.79 | 269

(10 of 34 rows shown)
