GRANDE: Gradient-Based Decision Tree Ensembles for Tabular Data
About
Despite the success of deep learning for text and image data, tree-based ensemble models remain state-of-the-art for machine learning with heterogeneous tabular data. However, there is a significant need for tabular-specific gradient-based methods due to their high flexibility. In this paper, we propose $\text{GRANDE}$, $\text{GRA}$die$\text{N}$t-Based $\text{D}$ecision Tree $\text{E}$nsembles, a novel approach for learning hard, axis-aligned decision tree ensembles using end-to-end gradient descent. GRANDE is based on a dense representation of tree ensembles, which permits the use of backpropagation with a straight-through operator to jointly optimize all model parameters. Our method combines axis-aligned splits, a useful inductive bias for tabular data, with the flexibility of gradient-based optimization. Furthermore, we introduce an advanced instance-wise weighting that facilitates learning representations for both simple and complex relations within a single model. We conducted an extensive evaluation on a predefined benchmark with 19 classification datasets and demonstrate that our method outperforms existing gradient-boosting and deep learning frameworks on most datasets. The method is available at: https://github.com/s-marton/GRANDE
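The core idea above, hard axis-aligned splits trained through a straight-through operator, can be sketched in a few lines. This is a minimal illustrative example, not the GRANDE implementation: the function name, the sigmoid relaxation, and the `temperature` parameter are assumptions chosen for clarity. The forward pass uses the hard split; in an autodiff framework the soft split would supply the gradient.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def straight_through_split(x, feature_idx, threshold, temperature=1.0):
    """Hypothetical sketch of one axis-aligned split node.

    Returns the hard routing decision (used in the forward pass) together
    with its soft, differentiable relaxation (which a straight-through
    estimator would use for the backward pass).
    """
    # Soft (differentiable) routing probability for the chosen feature.
    soft = sigmoid((x[:, feature_idx] - threshold) / temperature)
    # Hard, axis-aligned decision: route right iff x[feature] > threshold.
    hard = (x[:, feature_idx] > threshold).astype(float)
    # In PyTorch-style autodiff one would return
    #   hard.detach() + soft - soft.detach()
    # so the forward value is hard while gradients flow through soft.
    return hard, soft

x = np.array([[0.2], [0.8]])
hard, soft = straight_through_split(x, feature_idx=0, threshold=0.5)
```

Here both samples are routed by a hard comparison against the threshold, while the sigmoid relaxation stays close to the hard decision and provides a usable gradient for the split parameters.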
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Regression | CA Housing | RMSE | 0.481 | 45 |
| Classification | HE | Accuracy | 35 | 38 |
| Tabular Classification | NUM (L) (test) | Macro F1 | 0.958 | 18 |
| Tabular Classification | PHI (L) (test) | Macro F1 | 96.9 | 9 |
| Tabular Classification | SPE (M) (test) | Macro F1 | 72.5 | 9 |
| Tabular Classification | OZO (M) (test) | Macro F1 | 73.5 | 9 |
| Tabular Classification | QSA (M) (test) | Macro F1 | 85.4 | 9 |
| Tabular Classification | ILP (S) (test) | Macro F1 | 65.7 | 9 |
| Tabular Classification | WDB (S) (test) | Macro F1 | 0.975 | 9 |
| Tabular Classification | CYL (S) (test) | Macro F1 | 0.819 | 9 |