
k-Maximum Inner Product Attention for Graph Transformers and the Expressive Power of GraphGPS

About

Graph transformers have shown promise in overcoming limitations of traditional graph neural networks, such as oversquashing and difficulties in modeling long-range dependencies. However, their application to large-scale graphs is hindered by the quadratic memory and computational complexity of the all-to-all attention mechanism. Although alternatives such as linearized attention and restricted attention patterns have been proposed, these often degrade performance or limit expressive power. To better balance efficiency and effectiveness, we introduce k-Maximum Inner Product (k-MIP) attention for graph transformers. k-MIP attention selects the most relevant key nodes per query via a top-k operation, yielding a sparse yet flexible attention pattern. Combined with an attention score computation based on symbolic matrices, this results in linear memory complexity and practical speedups of up to an order of magnitude compared to all-to-all attention, enabling the processing of graphs with over 500k nodes on a single A100 GPU. We provide a theoretical analysis of expressive power, showing that k-MIP attention does not compromise the expressiveness of graph transformers: specifically, we prove that k-MIP transformers can approximate any full-attention transformer to arbitrary precision. In addition, we analyze the expressive power of the GraphGPS framework, in which we integrate our attention mechanism, and establish an upper bound on its graph distinguishing capability in terms of the S-SEG-WL test. Finally, we validate our approach on the Long Range Graph Benchmark, the City-Networks benchmark, and two custom large-scale inductive point cloud datasets, consistently ranking among the top-performing scalable graph transformers.
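To make the core idea concrete, here is a minimal sketch of top-k inner-product attention in NumPy. This is a hypothetical illustration, not the paper's implementation: for each query it keeps only the k keys with the largest inner products and runs a softmax over that sparse set. Note that this naive version still materializes the dense score matrix; the paper's linear memory complexity relies on its symbolic-matrix score computation, which is not reproduced here.

```python
import numpy as np

def k_mip_attention(Q, K, V, k):
    """Sketch of k-Maximum Inner Product (k-MIP) attention.

    Q: (n, d) queries, K: (m, d) keys, V: (m, d_v) values.
    For each query, attend only over the k keys with the
    largest inner products (a top-k MIP selection).
    """
    scores = Q @ K.T                                       # (n, m) inner products
    # Indices of the top-k keys per query (order within the
    # top-k set does not matter for the softmax).
    topk = np.argpartition(-scores, k - 1, axis=1)[:, :k]  # (n, k)
    top_scores = np.take_along_axis(scores, topk, axis=1)  # (n, k)
    # Scaled softmax restricted to the selected keys.
    top_scores = top_scores / np.sqrt(Q.shape[1])
    w = np.exp(top_scores - top_scores.max(axis=1, keepdims=True))
    w = w / w.sum(axis=1, keepdims=True)                   # (n, k), rows sum to 1
    # Weighted sum of the k selected value vectors per query.
    return np.einsum('nk,nkd->nd', w, V[topk])             # (n, d_v)
```

With k equal to the number of keys this reduces to ordinary full softmax attention, which matches the paper's expressiveness result that k-MIP transformers can approximate full-attention transformers.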

Jonas De Schouwer, Haitz Sáez de Ocáriz Borde, Xiaowen Dong • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Graph Regression | Peptides-struct, LRGB (test) | MAE | 0.2562 | 187 |
| Graph Classification | Peptides-func, LRGB (test) | AP | 0.6627 | 145 |
| Node Classification | PascalVOC-SP, LRGB (test) | F1 Score | 39.69 | 60 |
| Node Classification | COCO-SP, LRGB (test) | F1 Score | 35.56 | 33 |
| Point Cloud Semantic Segmentation | S3DIS (test) | mIoU (%) | 67.99 | 9 |
| Point Cloud Segmentation | ShapeNet Part (test) | F1 Score | 82.68 | 8 |
| Node Classification | City-Networks Paris (test) | Accuracy | 53.62 | 8 |
| Node Classification | City-Networks Shanghai (test) | Accuracy | 66.94 | 8 |
| Graph Learning | City-Networks Paris | Training Time (h) | 3.09 | 8 |
| Graph Learning | City-Networks Shanghai | Training Time (h) | 6.45 | 8 |

(10 of 14 rows shown)
