Real-Time Explanations for Tabular Foundation Models
About
Interpretability is central to scientific machine learning: understanding *why* a model makes its predictions enables hypothesis generation and validation. While tabular foundation models show strong predictive performance, existing explanation methods such as SHAP are computationally expensive, limiting interactive exploration. We introduce ShapPFN, a foundation model that integrates Shapley value regression directly into its architecture, producing both predictions and explanations in a single forward pass. On standard benchmarks, ShapPFN achieves competitive predictive performance while producing high-fidelity explanations (R² = 0.96, cosine similarity = 0.99) over 1000× faster than KernelSHAP (0.06 s vs. 610 s). Our code is available at https://github.com/kunumi/ShapPFN.
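The fidelity numbers above compare ShapPFN's explanations against reference SHAP values. A minimal sketch of how such a comparison can be computed, assuming two `(n_samples, n_features)` attribution arrays (the function name and metric definitions here are illustrative assumptions, not the authors' evaluation code):

```python
import numpy as np

def explanation_fidelity(shap_pred, shap_ref):
    """Compare fast attributions against reference SHAP values.

    shap_pred, shap_ref: (n_samples, n_features) arrays.
    Returns (R^2 over all entries, mean per-sample cosine similarity) --
    the two metrics reported in the abstract; exact definitions here
    are an assumption.
    """
    # R^2 treating all attribution entries as one regression target.
    ss_res = np.sum((shap_ref - shap_pred) ** 2)
    ss_tot = np.sum((shap_ref - shap_ref.mean()) ** 2)
    r2 = 1.0 - ss_res / ss_tot

    # Mean cosine similarity between each sample's attribution vectors.
    dots = np.sum(shap_pred * shap_ref, axis=1)
    norms = np.linalg.norm(shap_pred, axis=1) * np.linalg.norm(shap_ref, axis=1)
    cosine = float(np.mean(dots / norms))
    return float(r2), cosine
```

High values on both metrics indicate that the single-forward-pass explanations closely track the (much slower) KernelSHAP reference.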
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Tabular Classification | OpenML-CC18 Eval-only v2 (test) | Accuracy (analcat-auth) | 99.9 | 7 |
| Tabular Classification | OpenML-CC18 HPO subset v2 (test) | Banknote Performance | 99.3 | 7 |
| SHAP value approximation | Banknote (test) | R² | 0.965 | 2 |
| SHAP value approximation | blood-transf (test) | R² | 0.939 | 2 |
| SHAP value approximation | Diabetes (test) | R² | 0.984 | 2 |
| SHAP value approximation | Electricity (test) | R² | 0.975 | 2 |
| SHAP value approximation | Phoneme (test) | R² | 0.965 | 2 |
| SHAP value approximation | wilt (test) | R² | 0.952 | 2 |
| SHAP value approximation | analcat-dmft (test) | R² | 0.971 | 2 |
| SHAP value approximation | balance (test) | R² | 0.98 | 2 |