
TabM: Advancing Tabular Deep Learning with Parameter-Efficient Ensembling

About

Deep learning architectures for supervised learning on tabular data range from simple multilayer perceptrons (MLP) to sophisticated Transformers and retrieval-augmented methods. This study highlights a major, yet so far overlooked opportunity for designing substantially better MLP-based tabular architectures. Namely, our new model TabM relies on efficient ensembling, where one TabM efficiently imitates an ensemble of MLPs and produces multiple predictions per object. Compared to a traditional deep ensemble, in TabM, the underlying implicit MLPs are trained simultaneously, and (by default) share most of their parameters, which results in significantly better performance and efficiency. Using TabM as a new baseline, we perform a large-scale evaluation of tabular DL architectures on public benchmarks in terms of both task performance and efficiency, which renders the landscape of tabular DL in a new light. Generally, we show that MLPs, including TabM, form a line of stronger and more practical models compared to attention- and retrieval-based architectures. In particular, we find that TabM demonstrates the best performance among tabular DL models. Then, we conduct an empirical analysis on the ensemble-like nature of TabM. We observe that the multiple predictions of TabM are weak individually, but powerful collectively. Overall, our work brings an impactful technique to tabular DL and advances the performance-efficiency trade-off with TabM -- a simple and powerful baseline for researchers and practitioners.
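To make the efficient-ensembling idea concrete, here is a minimal NumPy sketch of a BatchEnsemble-style MLP: one shared weight matrix per layer plus small per-member scaling vectors, so k implicit MLPs are trained together and each object receives k predictions that are averaged. The class name `EnsembleLinear`, the rank-1 parameterization, and all shapes are illustrative assumptions for exposition, not the exact TabM architecture or its PyTorch implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

class EnsembleLinear:
    """Linear layer shared by k ensemble members.

    All members share one weight matrix W; each member only owns cheap
    per-feature input/output scaling vectors and a bias (BatchEnsemble-style
    rank-1 adapters). This is an illustrative sketch, not TabM's exact layer.
    """
    def __init__(self, d_in: int, d_out: int, k: int):
        self.W = rng.normal(0.0, d_in ** -0.5, (d_in, d_out))  # shared weights
        self.r = rng.normal(1.0, 0.1, (k, d_in))   # per-member input scaling
        self.s = rng.normal(1.0, 0.1, (k, d_out))  # per-member output scaling
        self.b = np.zeros((k, d_out))              # per-member bias

    def __call__(self, x: np.ndarray) -> np.ndarray:
        # x: (k, n, d_in) -> (k, n, d_out); members differ only via r, s, b.
        return (x * self.r[:, None, :]) @ self.W * self.s[:, None, :] + self.b[:, None, :]

k, n, d = 4, 8, 16                         # members, batch size, features
layer1 = EnsembleLinear(d, 32, k)
layer2 = EnsembleLinear(32, 1, k)

x = rng.normal(size=(n, d))                # one batch of tabular objects
xk = np.broadcast_to(x, (k, n, d))         # every member sees the same objects
h = np.maximum(layer1(xk), 0.0)            # shared-backbone MLP with ReLU
preds = layer2(h)                          # (k, n, 1): k predictions per object
final = preds.mean(axis=0)                 # (n, 1): aggregate at inference
```

One forward pass through this module produces all k predictions at once, which is why such an ensemble is far cheaper than training k independent MLPs: the dominant cost, the shared `W`, is paid once per layer.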

Yury Gorishniy, Akim Kotelnikov, Artem Babenko • 2024

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Classification | CO | Accuracy 0.973 | 39 |
| Fraud Detection | PAYSIM | F1 Score 92 | 29 |
| Fraud Detection | ccfraud | F1 Score 0.66 | 15 |
| Fraud Detection | IEEE-CIS | F1 Score 82 | 15 |
| Fraud Detection | CCF | F1 Score 85 | 15 |
| Classification | AD | Accuracy 85.7 | 8 |
| Classification | CH | Accuracy 86 | 7 |
