
Graph Transformers without Positional Encodings

About

Recently, Transformers for graph representation learning have become increasingly popular, achieving state-of-the-art performance on a wide variety of graph datasets, either alone or in combination with message-passing graph neural networks (MP-GNNs). Infusing graph inductive biases into the innately structure-agnostic transformer architecture in the form of structural or positional encodings (PEs) is key to achieving these impressive results. However, designing such encodings is tricky, and disparate attempts have been made to engineer them, including Laplacian eigenvectors, relative random-walk probabilities (RRWP), spatial encodings, centrality encodings, edge encodings, etc. In this work, we argue that such encodings may not be required at all, provided the attention mechanism itself incorporates information about the graph structure. We introduce Eigenformer, a Graph Transformer employing a novel spectrum-aware attention mechanism cognizant of the Laplacian spectrum of the graph, and empirically show that it achieves performance competitive with SOTA Graph Transformers on a number of standard GNN benchmarks. Additionally, we theoretically prove that Eigenformer can express various graph structural connectivity matrices, which is particularly essential when learning over smaller graphs.
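To illustrate the general idea of spectrum-aware attention, here is a minimal NumPy sketch. It is not the Eigenformer mechanism itself (the abstract does not specify the exact formulation); it only shows one plausible way attention scores can be biased by a kernel built from the Laplacian eigendecomposition, with a hypothetical per-eigenvalue `gate` standing in for learned spectral weights.

```python
import numpy as np

def laplacian_spectrum(adj):
    """Eigendecomposition of the symmetric normalized Laplacian
    L = I - D^{-1/2} A D^{-1/2} of an undirected graph."""
    deg = adj.sum(axis=1)
    d_inv_sqrt = np.where(deg > 0, deg ** -0.5, 0.0)
    lap = np.eye(len(adj)) - d_inv_sqrt[:, None] * adj * d_inv_sqrt[None, :]
    evals, evecs = np.linalg.eigh(lap)  # evecs[:, k] is the k-th eigenvector
    return evals, evecs

def spectrum_aware_attention(x, adj, w_q, w_k, w_v, gate):
    """Toy spectrum-aware attention: standard dot-product scores are biased
    by a spectral kernel K = sum_k g(lambda_k) u_k u_k^T, which injects
    graph structure directly into the attention matrix instead of adding
    positional encodings to the node features.

    `gate` is a hypothetical learned function of the eigenvalues; here it
    is just a callable for illustration."""
    evals, evecs = laplacian_spectrum(adj)
    bias = (evecs * gate(evals)) @ evecs.T          # (n, n) structural bias
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[1]) + bias
    attn = np.exp(scores - scores.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)         # row-wise softmax
    return attn @ v
```

For example, `gate = lambda lam: -lam` down-weights attention along high-frequency spectral components, so nearby (smoothly connected) nodes attend to each other more; a learned gate could instead emphasize whichever frequencies the task requires.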

Ayush Garg • 2024

Related benchmarks

Task                             Dataset                       Result                      Rank
Graph Regression                 ZINC (test)                   MAE 0.077                   204
Graph Regression                 Peptides-struct LRGB (test)   MAE 0.2599                  178
Graph Classification             CIFAR10 (test)                Test Accuracy 70.194        139
Node Classification              CLUSTER (test)                Test Accuracy 77.456        113
Graph Classification             MNIST (test)                  Accuracy 98.362             110
Multilabel Graph Classification  Peptides-func LRGB (test)     AP 64.14                     30
Node Classification              PATTERN (test)                Weighted Accuracy 86.738     14
