
Graph Neural Networks with Learnable Structural and Positional Representations

About

Graph neural networks (GNNs) have become the standard learning architectures for graphs. GNNs have been applied to numerous domains, ranging from quantum chemistry and recommender systems to knowledge graphs and natural language processing. A major issue with arbitrary graphs is the absence of canonical positional information for nodes, which limits the representation power of GNNs to distinguish, e.g., isomorphic nodes and other graph symmetries. One approach to tackling this issue is to introduce a Positional Encoding (PE) of nodes and inject it into the input layer, as in Transformers; Laplacian eigenvectors are one possible graph PE. In this work, we propose to decouple structural and positional representations to make it easy for the network to learn these two essential properties. We introduce a novel generic architecture which we call LSPE (Learnable Structural and Positional Encodings). We investigate several sparse and fully-connected (Transformer-like) GNNs, and observe a performance increase on molecular datasets, from 1.79% up to 64.14%, when considering learnable PE for both GNN classes.
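As a concrete illustration of the Laplacian-eigenvector PE mentioned above, here is a minimal sketch (not the authors' implementation) that computes the first k non-trivial eigenvectors of the symmetric normalized Laplacian of a small graph, one row of positional coordinates per node. The function name `laplacian_pe` and the example graph are illustrative assumptions.

```python
import numpy as np

def laplacian_pe(adj, k):
    """Return the k eigenvectors of the symmetric normalized Laplacian
    L = I - D^{-1/2} A D^{-1/2} with the smallest non-trivial eigenvalues,
    used as per-node positional encodings (shape: n_nodes x k)."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.where(deg > 0, deg ** -0.5, 0.0)
    lap = np.eye(len(adj)) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    # eigh returns eigenvalues in ascending order for symmetric matrices
    eigvals, eigvecs = np.linalg.eigh(lap)
    # drop the trivial constant eigenvector (eigenvalue 0)
    return eigvecs[:, 1:k + 1]

# Example: a 4-node cycle graph
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], dtype=float)
pe = laplacian_pe(adj, 2)
print(pe.shape)  # (4, 2)
```

Note that eigenvectors are defined only up to sign (and up to rotation within an eigenspace), which is one motivation the paper gives for learning the positional representation rather than fixing it at the input.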

Vijay Prakash Dwivedi, Anh Tuan Luu, Thomas Laurent, Yoshua Bengio, Xavier Bresson • 2021

Related benchmarks

Task                 | Dataset                     | Result               | Rank
---------------------|-----------------------------|----------------------|-----
Node Classification  | ogbn-arxiv (test)           | Accuracy 72.14       | 382
Graph Classification | ogbg-molpcba (test)         | AP 28.4              | 206
Graph Regression     | ZINC (test)                 | MAE 0.09             | 204
Graph Regression     | Peptides-struct LRGB (test) | MAE 0.25             | 178
Graph Regression     | ZINC 12K (test)             | MAE 0.09             | 164
Graph Classification | CIFAR10 (test)              | Test Accuracy 71.3   | 139
Graph Classification | Peptides-func LRGB (test)   | AP 0.6069            | 136
Graph Classification | MNIST (test)                | Accuracy 98.19       | 110
Graph Classification | CIFAR10                     | Accuracy 72.298      | 108
Graph Regression     | ZINC                        | MAE 0.07             | 96
Showing 10 of 42 rows

Other info

Code
