
Real-Time Explanations for Tabular Foundation Models

About

Interpretability is central to scientific machine learning, as understanding why models make predictions enables hypothesis generation and validation. While tabular foundation models show strong performance, existing explanation methods such as SHAP are computationally expensive, limiting interactive exploration. We introduce ShapPFN, a foundation model that integrates Shapley value regression directly into its architecture, producing both predictions and explanations in a single forward pass. On standard benchmarks, ShapPFN achieves competitive performance while producing high-fidelity explanations (R² = 0.96, cosine similarity = 0.99) over 1000× faster than KernelSHAP (0.06 s vs. 610 s). Our code is available at https://github.com/kunumi/ShapPFN.
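The fidelity figures in the abstract (R² and cosine similarity between fast approximate explanations and a reference method such as KernelSHAP) can be computed with a few lines of NumPy. The sketch below is illustrative only; the function name and the pooling choices (pooled R² over all entries, per-sample cosine averaged across rows) are assumptions, not the paper's exact evaluation protocol.

```python
import numpy as np

def explanation_fidelity(approx, reference):
    """Compare two attribution matrices of shape (n_samples, n_features).

    Returns (r2, cosine):
      r2     -- pooled R² of the approximate values against the reference
      cosine -- mean per-sample cosine similarity between attribution vectors
    """
    approx = np.asarray(approx, dtype=float)
    reference = np.asarray(reference, dtype=float)
    # Pooled R²: 1 - residual sum of squares / total sum of squares
    ss_res = np.sum((reference - approx) ** 2)
    ss_tot = np.sum((reference - reference.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    # Cosine similarity computed row by row, then averaged
    dots = np.sum(approx * reference, axis=1)
    norms = np.linalg.norm(approx, axis=1) * np.linalg.norm(reference, axis=1)
    cosine = float(np.mean(dots / norms))
    return float(r2), cosine
```

With identical inputs both metrics are exactly 1.0; values near 1.0 (as reported for ShapPFN) indicate that the fast explanations closely track the reference Shapley values.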

Luan Borges Teodoro Reis Sena, Francisco Galuppo Azevedo • 2026

Related benchmarks

Task                      Dataset                            Metric                    Result   Rank
Tabular Classification    OpenML-CC18 Eval-only v2 (test)    Accuracy (analcat-auth)   99.9     7
Tabular Classification    OpenML-CC18 HPO subset v2 (test)   Banknote Performance      99.3     7
SHAP value approximation  Banknote (test)                    R²                        0.965    2
SHAP value approximation  blood-transf (test)                R²                        0.939    2
SHAP value approximation  Diabetes (test)                    R²                        0.984    2
SHAP value approximation  Electricity (test)                 R²                        0.975    2
SHAP value approximation  Phoneme (test)                     R²                        0.965    2
SHAP value approximation  wilt (test)                        R²                        0.952    2
SHAP value approximation  analcat-dmft (test)                R²                        0.971    2
SHAP value approximation  balance (test)                     R²                        0.98     2
Showing 10 of 16 rows
