
Fixed Aggregation Features Can Rival GNNs

About

Graph neural networks (GNNs) are widely believed to excel at node representation learning through trainable neighborhood aggregations. We challenge this view by introducing Fixed Aggregation Features (FAFs), a training-free approach that transforms graph learning tasks into tabular problems. This simple shift enables the use of well-established tabular methods, offering strong interpretability and the flexibility to deploy diverse classifiers. Across 14 benchmarks, well-tuned multilayer perceptrons trained on FAFs rival or outperform state-of-the-art GNNs and graph transformers on 12 tasks -- often using only mean aggregation. The only exceptions are the Roman Empire and Minesweeper datasets, which typically require unusually deep GNNs. To explain the theoretical possibility of non-trainable aggregations, we connect our findings to Kolmogorov-Arnold representations and discuss when mean aggregation can be sufficient. In conclusion, our results call for (i) richer benchmarks benefiting from learning diverse neighborhood aggregations, (ii) strong tabular baselines as standard, and (iii) employing and advancing tabular models for graph data to gain new insights into related tasks.
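The core idea of the abstract — replacing trainable neighborhood aggregation with fixed, training-free aggregation so that any tabular classifier can be applied — can be sketched as follows. This is a minimal illustration assuming mean aggregation over row-normalized adjacency; the function name, hop count, and exact feature construction are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def mean_aggregation_features(A: np.ndarray, X: np.ndarray, hops: int = 2) -> np.ndarray:
    """Concatenate node features X with `hops` rounds of parameter-free
    mean aggregation over neighbours, yielding a tabular feature matrix.
    (Hypothetical sketch of the FAF idea, not the paper's code.)"""
    deg = A.sum(axis=1, keepdims=True)
    P = A / np.maximum(deg, 1.0)        # row-normalised adjacency = mean operator
    feats = [X]
    H = X
    for _ in range(hops):
        H = P @ H                       # mean of neighbours' current features
        feats.append(H)
    return np.concatenate(feats, axis=1)

# Toy graph: path 0-1-2 with 2-d node features
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
X = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
F = mean_aggregation_features(A, X, hops=2)
print(F.shape)  # (3, 6): original features plus two aggregation rounds
```

The resulting matrix `F` contains no learned parameters, so any off-the-shelf tabular model (an MLP, gradient-boosted trees, etc.) can be trained on it directly.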

Celia Rubio-Madrigal, Rebekka Burkholz • 2026

Related benchmarks

Task                  Dataset                  Metric          Result (%)  Rank
Node Classification   Cora (test)              Mean Accuracy   82.84       861
Node Classification   Citeseer (test)          Accuracy        70.48       824
Node Classification   PubMed (test)            Accuracy        80.96       546
Node Classification   Chameleon (test)         Mean Accuracy   42.96       297
Node Classification   Squirrel (test)          Mean Accuracy   44.59       267
Node Classification   Coauthor-CS (test)       Accuracy        95.37       83
Node Classification   Amazon Computer (test)   Accuracy        94.01       76
Node Classification   Wiki-CS (test)           Accuracy        80.25       75
Node Classification   Amazon Photo (test)      Accuracy        96.54       74
Node Classification   Amazon-Ratings (test)    Accuracy        55.09       51

Showing 10 of 14 rows.
