
Can TabPFN Compete with GNNs for Node Classification via Graph Tabularization?

About

Foundation models pretrained on large-scale data have demonstrated remarkable zero-shot generalization across domains. Building on the success of TabPFN for tabular data and its recent extension to time series, we investigate whether graph node classification can be effectively reformulated as a tabular learning problem. We introduce TabPFN-GN, which transforms graph data into tabular features by extracting node attributes, structural properties, positional encodings, and optionally smoothed neighborhood features. This enables TabPFN to perform node classification directly, without any graph-specific training or language model dependencies. Our experiments on 12 benchmark datasets reveal that TabPFN-GN achieves performance competitive with GNNs on homophilous graphs and consistently outperforms them on heterophilous graphs. These results demonstrate that principled feature engineering can bridge the gap between tabular and graph domains, providing a practical alternative to task-specific GNN training and LLM-dependent graph foundation models.
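The tabularization step described in the abstract can be sketched with plain numpy: concatenate per-node structural properties (degree), positional encodings (Laplacian eigenvectors), and neighborhood-smoothed feature averages into one flat feature row per node. The function name `tabularize_graph` and the specific feature choices below are illustrative assumptions, not the authors' implementation; TabPFN-GN may use different or additional features.

```python
import numpy as np

def tabularize_graph(X, A, k_smooth=2, k_pe=2):
    """Turn a graph into a node-feature table for a tabular classifier.

    X : (n, d) node attribute matrix
    A : (n, n) symmetric adjacency matrix
    Returns an (n, 1 + k_pe + d * (k_smooth + 1)) feature table.
    """
    deg = A.sum(axis=1)                          # structural property: node degree
    # Row-normalized adjacency for neighborhood smoothing (mean over neighbors).
    A_norm = A / np.maximum(deg, 1.0)[:, None]
    smoothed = [X]
    for _ in range(k_smooth):
        smoothed.append(A_norm @ smoothed[-1])   # k-hop smoothed attributes
    # Positional encoding: low-frequency eigenvectors of the Laplacian L = D - A.
    L = np.diag(deg) - A
    _, eigvecs = np.linalg.eigh(L)
    pe = eigvecs[:, 1:1 + k_pe]                  # skip the trivial constant eigenvector
    return np.hstack([deg[:, None], pe] + smoothed)
```

Each row of the returned table describes one node, so the rows (with node labels) can be passed to any scikit-learn-style tabular classifier, e.g. `TabPFNClassifier` from the `tabpfn` package, via the usual `fit`/`predict` interface.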

Jeongwhan Choi, Woosung Kang, Minseo Kim, Jongwoo Kim, Noseong Park • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Node Classification | Citeseer (test) | Accuracy | 0.7214 | 729 |
| Node Classification | Cora (test) | Mean Accuracy | 81.98 | 687 |
| Node Classification | PubMed (test) | Accuracy | 82.74 | 500 |
| Node Classification | Squirrel (test) | Mean Accuracy | 46.66 | 234 |
| Node Classification | Chameleon (test) | Mean Accuracy | 49.11 | 230 |
| Node Classification | Texas (test) | Mean Accuracy | 80.81 | 228 |
| Graph Classification | MUTAG (10-fold cross-validation) | Accuracy | 88.36 | 206 |
| Node Classification | Wisconsin (test) | Mean Accuracy | 85.1 | 198 |
| Graph Classification | PROTEINS (10-fold cross-validation) | Accuracy | 76.8 | 197 |
| Node Classification | Cornell (test) | Mean Accuracy | 74.05 | 188 |

Showing 10 of 20 rows.
