
Classic GNNs are Strong Baselines: Reassessing GNNs for Node Classification

About

Graph Transformers (GTs) have recently emerged as popular alternatives to traditional message-passing Graph Neural Networks (GNNs), due to their theoretically superior expressiveness and impressive performance reported on standard node classification benchmarks, often significantly outperforming GNNs. In this paper, we conduct a thorough empirical analysis to reevaluate the performance of three classic GNN models (GCN, GAT, and GraphSAGE) against GTs. Our findings suggest that the previously reported superiority of GTs may have been overstated due to suboptimal hyperparameter configurations in GNNs. Remarkably, with slight hyperparameter tuning, these classic GNN models achieve state-of-the-art performance, matching or even exceeding that of recent GTs across 17 out of the 18 diverse datasets examined. Additionally, we conduct detailed ablation studies to investigate the influence of various GNN configurations, such as normalization, dropout, residual connections, and network depth, on node classification performance. Our study aims to promote a higher standard of empirical rigor in the field of graph machine learning, encouraging more accurate comparisons and evaluations of model capabilities.
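The abstract lists the configuration knobs the ablations vary: normalization, dropout, residual connections, and depth. As a rough illustration of where these knobs sit in a classic GNN, here is a minimal NumPy sketch of a single GCN-style message-passing layer with symmetric adjacency normalization, dropout, and an optional residual connection. This is an assumption-laden toy (function name `gcn_layer` and all details are illustrative), not the authors' implementation.

```python
import numpy as np

def gcn_layer(A, H, W, p_drop=0.5, residual=True, rng=None):
    """One GCN-style layer with the knobs the paper ablates:
    symmetric normalization, dropout, and a residual connection.
    Illustrative sketch only, not the authors' code.
    A: (n, n) adjacency, H: (n, d) node features, W: (d, d') weights."""
    rng = rng or np.random.default_rng(0)
    n = A.shape[0]
    A_hat = A + np.eye(n)                       # add self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt    # D^-1/2 (A + I) D^-1/2
    Z = np.maximum(A_norm @ H @ W, 0.0)         # propagate, transform, ReLU
    if p_drop > 0.0:                            # dropout (training mode)
        mask = rng.random(Z.shape) >= p_drop
        Z = Z * mask / (1.0 - p_drop)
    if residual and H.shape == Z.shape:         # residual connection
        Z = Z + H
    return Z

# Tiny usage example on a 3-node path graph.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
H = np.ones((3, 4))
W = np.eye(4)
Z = gcn_layer(A, H, W, p_drop=0.0)
```

Stacking several such layers (the depth knob) and choosing where to apply normalization and dropout is exactly the design space the ablation studies explore.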

Yuankai Luo, Lei Shi, Xiao-Ming Wu • 2024

Related benchmarks

Task                  Dataset                       Metric          Result    Rank
Node Classification   Cora                          Accuracy        85.1      885
Node Classification   Chameleon                     Accuracy        46.29     549
Node Classification   Squirrel                      Accuracy        45.01     500
Node Classification   ogbn-arxiv (test)             Accuracy        73.4      382
Node Classification   Pubmed                        Accuracy        90.04     307
Node Classification   Citeseer                      Accuracy        77.53     275
Node Classification   wikiCS                        Accuracy        81.07     198
Graph Regression      Peptides-struct LRGB (test)   MAE             0.2512    178
Node Classification   Photo                         Mean Accuracy   96.27     165
Node Classification   Amazon Photo                  Accuracy        96.78     150

(Showing 10 of 34 rows.)

Other info

Code
