
TabNet: Attentive Interpretable Tabular Learning

About

We propose a novel high-performance and interpretable canonical deep tabular data learning architecture, TabNet. TabNet uses sequential attention to choose which features to reason from at each decision step, enabling interpretability and more efficient learning as the learning capacity is used for the most salient features. We demonstrate that TabNet outperforms other neural network and decision tree variants on a wide range of non-performance-saturated tabular datasets and yields interpretable feature attributions plus insights into the global model behavior. Finally, for the first time to our knowledge, we demonstrate self-supervised learning for tabular data, significantly improving performance with unsupervised representation learning when unlabeled data is abundant.
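The core mechanism described above is a sequential attention step that emits a sparse feature-selection mask at each decision step, with a prior term that discourages reusing features already attended to earlier. A minimal NumPy sketch of that idea follows; it uses the sparsemax projection the paper builds on, but the function names (`attentive_step`, `prior`, `gamma`) and the random logits are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch of TabNet-style sequential attention over features.
# Assumptions: sparsemax masking and a multiplicative prior with relaxation
# parameter gamma, per the paper's description; all names are illustrative.
import numpy as np

def sparsemax(z):
    """Project logits onto the probability simplex.

    Unlike softmax, sparsemax can assign exactly zero weight to weak
    features, which is what makes the per-step masks interpretable."""
    z_sorted = np.sort(z)[::-1]
    k = np.arange(1, len(z) + 1)
    cumsum = np.cumsum(z_sorted)
    support = z_sorted + (1.0 - cumsum) / k > 0
    k_max = k[support][-1]
    tau = (cumsum[k_max - 1] - 1.0) / k_max
    return np.maximum(z - tau, 0.0)

def attentive_step(logits, prior, gamma=1.3):
    """One decision step: produce a sparse feature mask, update the prior.

    `prior` down-weights features heavily used at earlier steps, so each
    step tends to attend to different columns; gamma > 1 allows reuse."""
    mask = sparsemax(logits * prior)   # sparse feature-selection mask
    prior = prior * (gamma - mask)     # shrink prior where mask was large
    return mask, prior

# Toy run: 3 decision steps over a 5-feature input row.
rng = np.random.default_rng(0)
prior = np.ones(5)
for step in range(3):
    logits = rng.normal(size=5)        # stand-in for the attentive transformer
    mask, prior = attentive_step(logits, prior)
    # each mask is nonnegative, sums to 1, and is typically sparse
```

Aggregating these per-step masks (weighted by each step's contribution) is what yields the instance-level and global feature attributions the abstract refers to.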

Sercan O. Arik, Tomas Pfister • 2019

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Classification | HI | Accuracy | 0.56 | 45 |
| Binary Classification | cylinder-bands (CB) (test) | AUROC | 0.68 | 40 |
| Binary Classification | dresses-sales (DS) (test) | AUROC | 47.8 | 40 |
| Binary Classification | income IC 1995 (test) | AUROC | 0.896 | 39 |
| Classification | CO | Accuracy | 0.957 | 39 |
| Classification | GE | Accuracy | 60 | 37 |
| Credit approval prediction | Credit Approval dataset (test) | AUROC | 0.8 | 37 |
| Aggregate Tabular Benchmarking | Aggregate | Avg Rank | 11.75 | 33 |
| Binary Classification | adult (AD) (test) | AUROC | 0.904 | 32 |
| Classification | WINE (test) | Accuracy | 86.73 | 29 |

Showing 10 of 123 rows.
