
Can TabPFN Compete with GNNs for Node Classification via Graph Tabularization?

About

Foundation models pretrained on large data have demonstrated remarkable zero-shot generalization capabilities across domains. Building on the success of TabPFN for tabular data and its recent extension to time series, we investigate whether graph node classification can be effectively reformulated as a tabular learning problem. We introduce TabPFN-GN, which transforms graph data into tabular features by extracting node attributes, structural properties, positional encodings, and optionally smoothed neighborhood features. This enables TabPFN to perform direct node classification without any graph-specific training or language model dependencies. Our experiments on 12 benchmark datasets reveal that TabPFN-GN achieves competitive performance with GNNs on homophilous graphs and consistently outperforms them on heterophilous graphs. These results demonstrate that principled feature engineering can bridge the gap between tabular and graph domains, providing a practical alternative to task-specific GNN training and LLM-dependent graph foundation models.
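The abstract's tabularization step can be sketched in plain NumPy: for each node, concatenate its raw attributes, simple structural properties (degree and clustering coefficient), Laplacian-eigenvector positional encodings, and mean-of-neighbor ("smoothed") features into one table row. This is a minimal illustration of the idea, not the paper's implementation; the exact feature set, the `tabularize_graph` name, and the default of four positional encodings are assumptions.

```python
import numpy as np

def tabularize_graph(A, X, num_pe=4):
    """Turn an adjacency matrix A (n x n) and node attributes X (n x d)
    into one tabular row per node, combining the four feature groups
    named in the abstract: attributes, structure, positional encodings,
    and smoothed neighborhood features."""
    n = A.shape[0]
    deg = A.sum(axis=1, keepdims=True)

    # structural properties: degree + clustering coefficient
    # (triangles through each node divided by possible wedges)
    tri = np.diag(A @ A @ A) / 2.0
    wedges = deg.ravel() * (deg.ravel() - 1) / 2.0
    clus = np.divide(tri, wedges, out=np.zeros(n), where=wedges > 0).reshape(-1, 1)

    # positional encodings: eigenvectors of the symmetric normalized
    # Laplacian, skipping the trivial constant eigenvector
    d_inv_sqrt = 1.0 / np.sqrt(np.maximum(deg.ravel(), 1.0))
    L = np.eye(n) - d_inv_sqrt[:, None] * A * d_inv_sqrt[None, :]
    _, eigvecs = np.linalg.eigh(L)
    pe = eigvecs[:, 1:num_pe + 1]

    # smoothed neighborhood features: mean of each node's neighbors' attributes
    smooth = (A / np.maximum(deg, 1.0)) @ X

    return np.hstack([X, deg, clus, pe, smooth])

# usage: a 6-node cycle with one chord, 3-d random node attributes
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0), (0, 2)]
A = np.zeros((6, 6))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
X = np.random.default_rng(0).normal(size=(6, 3))
T = tabularize_graph(A, X)  # 3 attrs + 2 structural + 4 PE + 3 smoothed = 12 columns
```

The resulting table `T` (with node labels as targets) could then be handed to an off-the-shelf tabular classifier such as the `tabpfn` package's `TabPFNClassifier` via its scikit-learn-style `fit`/`predict` interface, with no graph-specific training.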

Jeongwhan Choi, Woosung Kang, Minseo Kim, Jongwoo Kim, Noseong Park • 2025

Related benchmarks

| Task | Dataset | Metric | Result (%) | Rank |
|---|---|---|---|---|
| Node Classification | Cora (test) | Mean Accuracy | 81.98 | 861 |
| Node Classification | Citeseer (test) | Accuracy | 72.14 | 824 |
| Node Classification | PubMed (test) | Accuracy | 82.74 | 546 |
| Node Classification | Chameleon (test) | Mean Accuracy | 49.11 | 297 |
| Node Classification | Cornell (test) | Mean Accuracy | 74.05 | 274 |
| Node Classification | Texas (test) | Mean Accuracy | 80.81 | 269 |
| Node Classification | Squirrel (test) | Mean Accuracy | 46.66 | 267 |
| Node Classification | Wisconsin (test) | Mean Accuracy | 85.10 | 239 |
| Node Classification | Actor (test) | Mean Accuracy | 37.22 | 237 |
| Graph Classification | MUTAG (10-fold cross-validation) | Accuracy | 88.36 | 219 |

Showing 10 of 20 rows.
