
Recurrent Distance Filtering for Graph Representation Learning

About

Graph neural networks based on iterative one-hop message passing have been shown to struggle in harnessing the information from distant nodes effectively. Conversely, graph transformers allow each node to attend to all other nodes directly, but lack graph inductive bias and have to rely on ad-hoc positional encoding. In this paper, we propose a new architecture to reconcile these challenges. Our approach stems from the recent breakthroughs in long-range modeling provided by deep state-space models: for a given target node, our model aggregates other nodes by their shortest distances to the target and uses a linear RNN to encode the sequence of hop representations. The linear RNN is parameterized in a particular diagonal form for stable long-range signal propagation and is theoretically expressive enough to encode the neighborhood hierarchy. With no need for positional encoding, we empirically show that the performance of our model is comparable to or better than that of state-of-the-art graph transformers on various benchmarks, with a significantly reduced computational cost. Our code is open-source at https://github.com/skeletondyh/GRED.
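The core idea of the abstract — group nodes by shortest-path distance to a target, pool each hop, and run a stable diagonal linear RNN over the hop sequence — can be sketched as follows. This is a minimal illustration of the described mechanism, not the authors' implementation; the function names and the mean-pooling choice are assumptions.

```python
import numpy as np
from collections import deque

def hops_from(adj, target):
    """BFS shortest-path distances from `target`; returns {distance: [nodes]}."""
    dist = {target: 0}
    q = deque([target])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    hops = {}
    for v, d in dist.items():
        hops.setdefault(d, []).append(v)
    return hops

def hop_sequence_encoding(adj, feats, target, lam):
    """Encode a target node by scanning pooled hop representations with a
    diagonal linear RNN. `lam` is a per-channel decay; keeping |lam| <= 1
    mirrors the stability condition for long-range signal propagation."""
    hops = hops_from(adj, target)
    K = max(hops)
    h = np.zeros_like(feats[0], dtype=float)
    # Scan from the farthest hop (k = K) toward the target itself (k = 0),
    # so distant hops are attenuated by higher powers of lam.
    for k in range(K, -1, -1):
        x_k = np.mean([feats[v] for v in hops[k]], axis=0)  # pool hop k
        h = lam * h + x_k  # diagonal linear recurrence
    return h
```

On a path graph 0-1-2-3 with one-hot features and decay 0.5, the encoding of node 0 weights hop k by 0.5^k, showing how information from distant nodes decays smoothly rather than vanishing abruptly.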

Yuhui Ding, Antonio Orvieto, Bobby He, Thomas Hofmann • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Graph Regression | Peptides-struct, LRGB (test) | MAE | 0.2455 | 178 |
| Graph Regression | ZINC 12K (test) | MAE | 0.077 | 164 |
| Graph Classification | CIFAR10 (test) | Accuracy | 76.853 | 139 |
| Node Classification | CLUSTER (test) | Accuracy | 78.495 | 113 |
| Graph Classification | MNIST (test) | Accuracy | 98.383 | 110 |
| Node Classification | PATTERN (test) | Accuracy | 86.759 | 88 |
| Multilabel Graph Classification | Peptides-func, LRGB (test) | AP | 71.33 | 30 |
