
Even Sparser Graph Transformers

About

Graph Transformers excel in long-range dependency modeling, but generally require quadratic memory complexity in the number of nodes in an input graph, and hence have trouble scaling to large graphs. Sparse attention variants such as Exphormer can help, but may require high-degree augmentations to the input graph for good performance, and do not attempt to sparsify an already-dense input graph. As the learned attention mechanisms tend to use few of these edges, such high-degree connections may be unnecessary. We show (empirically and with theoretical backing) that attention scores on graphs are usually quite consistent across network widths, and use this observation to propose a two-stage procedure, which we call Spexphormer: first, train a narrow network on the full augmented graph. Next, use only the active connections to train a wider network on a much sparser graph. We establish theoretical conditions when a narrow network's attention scores can match those of a wide network, and show that Spexphormer achieves good performance with drastically reduced memory requirements on various graph datasets.
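The two-stage idea above (score edges with a narrow network, then keep only the active connections for the wide network) can be sketched as follows. This is an illustrative NumPy outline, not the authors' implementation: the per-destination edge softmax and the top-k edge-keeping rule are assumptions standing in for the learned narrow-network attention scores.

```python
import numpy as np

def edge_softmax(scores, dst, n_nodes):
    """Normalize raw edge scores with a softmax over each node's incoming edges,
    mimicking how attention weights are distributed across a node's neighbors."""
    out = np.zeros_like(scores, dtype=float)
    for v in range(n_nodes):
        mask = dst == v
        if mask.any():
            s = scores[mask]
            e = np.exp(s - s.max())  # shift for numerical stability
            out[mask] = e / e.sum()
    return out

def sparsify_edges(src, dst, scores, n_nodes, k=2):
    """Stage 1 -> Stage 2: keep only the k incoming edges with the highest
    (narrow-network) attention weight per node; drop the rest of the
    augmented graph before training the wider network."""
    att = edge_softmax(scores, dst, n_nodes)
    keep = np.zeros(len(src), dtype=bool)
    for v in range(n_nodes):
        idx = np.where(dst == v)[0]
        if len(idx):
            keep[idx[np.argsort(att[idx])[-k:]]] = True
    return src[keep], dst[keep]
```

On a toy graph with 5 edges into 3 nodes, `sparsify_edges(..., k=2)` retains at most 2 incoming edges per node, so the wider second-stage network attends over a much sparser edge set.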

Hamed Shirzad, Honghao Lin, Balaji Venkatachalam, Ameya Velingker, David Woodruff, Danica Sutherland • 2024

Related benchmarks

Task                 Dataset         Metric         Result   Rank
Node Classification  Actor           Accuracy       38.59    397
Node Classification  Photo           Mean Accuracy  95.33    343
Node Classification  wikiCS          Accuracy       78.2     317
Node Classification  Roman-Empire    Accuracy       87.54    206
Node Classification  Pubmed          Accuracy       89.87    178
Node Classification  amazon-ratings  Accuracy       50.48    173
Node Classification  Ogbn-arxiv      Accuracy       70.82    170
Node Classification  Physics         Accuracy       96.7     145
Node Classification  CS              Accuracy       95       144
Node Classification  Computer        Accuracy       91.09    89
Showing 10 of 20 rows
