Active In-Context Learning for Tabular Foundation Models
About
Active learning (AL) reduces labeling cost by querying informative samples, but in tabular settings its cold-start gains are often limited because uncertainty estimates are unreliable when models are trained on very few labels. Tabular foundation models such as TabPFN provide calibrated probabilistic predictions via in-context learning (ICL), i.e., without task-specific weight updates, enabling an AL regime in which the labeled context, rather than the model parameters, is iteratively optimized. We formalize Tabular Active In-Context Learning (Tab-AICL) and instantiate it with four acquisition rules: uncertainty (TabPFN-Margin), diversity (TabPFN-Coreset), an uncertainty-diversity hybrid (TabPFN-Hybrid), and a scalable two-stage method (TabPFN-Proxy-Hybrid) that shortlists candidates with a lightweight linear proxy before TabPFN-based selection. Across 20 classification benchmarks, Tab-AICL improves cold-start sample efficiency over retrained gradient-boosting baselines (CatBoost-Margin and XGBoost-Margin), measured by normalized area under the learning curve (AULC) up to 100 labeled samples.
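The core loop described above (score the unlabeled pool with the model conditioned on the current labeled context, query the most informative point, and grow the context instead of retraining) can be sketched as follows. This is a minimal illustration, not the paper's implementation: `predict_proba` is a hypothetical soft nearest-centroid stand-in for TabPFN's in-context forward pass, and only the margin rule (TabPFN-Margin) is shown.

```python
import numpy as np

def predict_proba(X_ctx, y_ctx, X_query, n_classes, tau=1.0):
    # Hypothetical stand-in for TabPFN's ICL prediction: a soft
    # nearest-centroid classifier conditioned only on the labeled
    # context (no weight updates). Tab-AICL would call TabPFN here.
    centroids = np.stack([
        X_ctx[y_ctx == c].mean(axis=0) if np.any(y_ctx == c)
        else np.zeros(X_ctx.shape[1])
        for c in range(n_classes)
    ])
    # Softmax over negative squared distances to each class centroid.
    d = ((X_query[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
    logits = -d / tau
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    return p / p.sum(axis=1, keepdims=True)

def margin_acquire(X_ctx, y_ctx, X_pool, n_classes):
    # Margin rule: query the pool point whose top-two predicted
    # class probabilities are closest (smallest margin = most uncertain).
    p = predict_proba(X_ctx, y_ctx, X_pool, n_classes)
    top2 = np.sort(p, axis=1)[:, -2:]
    return int(np.argmin(top2[:, 1] - top2[:, 0]))

# Toy demo: two Gaussian blobs, one seed label per class, five queries.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(3, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
ctx_idx = [0, 50]                                   # seed context
pool_idx = [i for i in range(100) if i not in ctx_idx]
for _ in range(5):
    j = margin_acquire(X[ctx_idx], y[ctx_idx], X[pool_idx], n_classes=2)
    ctx_idx.append(pool_idx.pop(j))                 # oracle labels the query
```

Note that each round only re-conditions the predictor on the enlarged context; with a real TabPFN backbone this is a single forward pass, which is what makes the context (rather than the parameters) the object being optimized.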
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Classification | Adult | ROC AUC | 0.841 | 40 |
| Classification | bank-marketing | ROC AUC | 0.837 | 19 |
| Classification | vehicle | Cohen's Kappa | 0.989 | 16 |
| Classification | blood-transfusion | ROC AUC | 72.1 | 16 |
| Classification | Covertype | Cohen's Kappa | 0.471 | 16 |
| Classification | phoneme | Cohen's Kappa | 0.607 | 16 |
| Classification | bank-marketing | Cohen's Kappa | 0.308 | 16 |
| Classification | tic-tac-toe | ROC AUC | 68.1 | 15 |
| Classification | vehicle | ROC AUC | 100 | 14 |
| Classification | Balance Scale | ROC AUC | 0.998 | 14 |