
Enhancing Graph Transformers with Hierarchical Distance Structural Encoding

About

Graph transformers need strong inductive biases to derive meaningful attention scores. Yet, current methods often fall short in capturing long-range dependencies, hierarchical structures, or community structures, which are common in graphs such as molecules, social networks, and citation networks. This paper presents a Hierarchical Distance Structural Encoding (HDSE) method that models node distances in a graph, focusing on their multi-level, hierarchical nature. We introduce a novel framework to seamlessly integrate HDSE into the attention mechanism of existing graph transformers, allowing simultaneous application with other positional encodings. To scale graph transformers with HDSE to large graphs, we further propose a high-level HDSE that effectively biases linear transformers towards graph hierarchies. We theoretically prove the superiority of HDSE over shortest path distances in terms of expressivity and generalization. Empirically, we demonstrate that graph transformers with HDSE excel at graph classification and regression on 7 graph-level datasets, and at node classification on 11 large-scale graphs, including graphs with up to a billion nodes.
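To make the idea concrete, here is a minimal, hedged sketch of the two ingredients the abstract describes: a multi-level distance between node pairs (hop distance in the original graph plus hop distance between their clusters in a coarsened graph), and an attention row whose logits are shifted by a per-level distance bias. This is an illustrative reconstruction, not the paper's implementation; the function names (`bfs_distances`, `hdse`, `biased_attention_row`), the two-level hierarchy, and the fixed `level_bias` scalars (learnable parameters in a real model) are all assumptions.

```python
import math
from collections import deque

def bfs_distances(adj, src):
    """Hop (shortest-path) distances from src via BFS; -1 if unreachable."""
    dist = {v: -1 for v in adj}
    dist[src] = 0
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if dist[v] == -1:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def hdse(adj, coarse_of, coarse_adj):
    """Two-level hierarchical distance for every node pair:
    (hop distance in the original graph,
     hop distance between the nodes' clusters in the coarsened graph)."""
    d0 = {u: bfs_distances(adj, u) for u in adj}
    d1 = {c: bfs_distances(coarse_adj, c) for c in coarse_adj}
    return {(u, v): (d0[u][v], d1[coarse_of[u]][coarse_of[v]])
            for u in adj for v in adj}

def biased_attention_row(q, keys, dists, level_bias):
    """Attention weights for one query: scaled dot-product logits plus a
    scalar bias per hierarchy level (fixed here; learned in practice)."""
    logits = []
    for k, (d0, d1) in zip(keys, dists):
        dot = sum(a * b for a, b in zip(q, k)) / math.sqrt(len(q))
        logits.append(dot + level_bias[0] * d0 + level_bias[1] * d1)
    m = max(logits)                      # stabilized softmax
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

# Toy graph: two triangles {0,1,2} and {3,4,5} joined by edge 2-3,
# coarsened into clusters A and B.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3],
       3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
coarse_of = {0: "A", 1: "A", 2: "A", 3: "B", 4: "B", 5: "B"}
coarse_adj = {"A": ["B"], "B": ["A"]}

D = hdse(adj, coarse_of, coarse_adj)
# With identical keys, negative biases make attention decay with
# hierarchical distance from the query node.
w = biased_attention_row([1.0, 0.0],
                         [[1.0, 0.0]] * 3,
                         [D[(0, 0)], D[(0, 1)], D[(0, 5)]],
                         level_bias=[-0.5, -0.5])
```

The high-level intuition: nodes in the same cluster keep a zero second-level distance even when far apart in hops, so the bias distinguishes "far but in the same community" from "far and in a different community", which plain shortest-path encodings cannot.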

Yuankai Luo, Hongkang Li, Lei Shi, Xiao-Ming Wu • 2023

Related benchmarks

Task                 | Dataset         | Metric        | Result | Rank
Node Classification  | Citeseer (test) | Accuracy      | 0.7831 | 729
Node Classification  | Cora (test)     | Mean Accuracy | 89.67  | 687
Node Classification  | Chameleon       | Accuracy      | 46     | 549
Node Classification  | PubMed (test)   | Accuracy      | 90.63  | 500
Node Classification  | Squirrel        | Accuracy      | 43.2   | 500
Node Classification  | Actor           | Accuracy      | 38     | 237
Graph Regression     | ZINC (test)     | MAE           | 0.059  | 204
Graph Classification | CIFAR10 (test)  | Test Accuracy | 76.473 | 139
Node Classification  | CLUSTER (test)  | Test Accuracy | 80.026 | 113
Graph Classification | MNIST (test)    | Accuracy      | 98.424 | 110

(Showing 10 of 20 rows)
