
Fixed Aggregation Features Can Rival GNNs

About

Graph neural networks (GNNs) are widely believed to excel at node representation learning through trainable neighborhood aggregations. We challenge this view by introducing Fixed Aggregation Features (FAFs), a training-free approach that transforms graph learning tasks into tabular problems. This simple shift enables the use of well-established tabular methods, offering strong interpretability and the flexibility to deploy diverse classifiers. Across 14 benchmarks, well-tuned multilayer perceptrons trained on FAFs rival or outperform state-of-the-art GNNs and graph transformers on 12 tasks -- often using only mean aggregation. The only exceptions are the Roman Empire and Minesweeper datasets, which typically require unusually deep GNNs. To explain the theoretical possibility of non-trainable aggregations, we connect our findings to Kolmogorov-Arnold representations and discuss when mean aggregation can be sufficient. In conclusion, our results call for (i) richer benchmarks benefiting from learning diverse neighborhood aggregations, (ii) strong tabular baselines as standard, and (iii) employing and advancing tabular models for graph data to gain new insights into related tasks.
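To make the core idea concrete, here is a minimal sketch of what a training-free mean-aggregation feature pipeline could look like. This is an illustrative reconstruction based only on the abstract, not the authors' code: the function name, the hop-concatenation scheme, and the API are assumptions.

```python
import numpy as np

def fixed_aggregation_features(X, A, hops=2):
    """Illustrative sketch of Fixed Aggregation Features (FAFs):
    concatenate each node's own features with mean-aggregated
    neighborhood features at increasing hop depths. No trainable
    parameters are involved; the name and signature are hypothetical."""
    # Row-normalize the adjacency so A_hat @ X is a mean over neighbors.
    deg = A.sum(axis=1, keepdims=True)
    A_hat = A / np.clip(deg, 1, None)
    feats = [X]
    H = X
    for _ in range(hops):
        H = A_hat @ H  # mean over neighbors, one hop deeper each iteration
        feats.append(H)
    # The result is a plain feature table, ready for any tabular model
    # (e.g. a well-tuned MLP, as in the paper's experiments).
    return np.concatenate(feats, axis=1)

# Toy example: a 3-node path graph (0-1-2) with 2-dimensional features.
A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
X = np.array([[1., 0.], [0., 1.], [1., 1.]])
F = fixed_aggregation_features(X, A, hops=2)
print(F.shape)  # (3, 6): original features + 1-hop mean + 2-hop mean
```

Because the graph structure is baked into fixed columns up front, model choice and interpretability tooling from the tabular world apply directly afterward.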

Celia Rubio-Madrigal, Rebekka Burkholz • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Node Classification | Citeseer (test) | Accuracy | 0.7048 | 729 |
| Node Classification | Cora (test) | Mean Accuracy | 82.84 | 687 |
| Node Classification | PubMed (test) | Accuracy | 80.96 | 500 |
| Node Classification | Squirrel (test) | Mean Accuracy | 44.59 | 234 |
| Node Classification | Chameleon (test) | Mean Accuracy | 42.96 | 230 |
| Node Classification | Amazon Computer (test) | Accuracy | 94.01 | 76 |
| Node Classification | Amazon Photo (test) | Accuracy | 96.54 | 60 |
| Node Classification | Coauthor-CS (test) | Accuracy | 95.37 | 47 |
| Node Classification | Amazon-Ratings (test) | Accuracy | 55.09 | 37 |
| Node Classification | Wiki-CS (test) | Accuracy | 80.25 | 18 |

Showing 10 of 14 rows.
