Optimal Classification Trees for Continuous Feature Data Using Dynamic Programming with Branch-and-Bound
About
Computing an optimal classification tree that provably maximizes training performance within a given size limit is NP-hard, and in practice most state-of-the-art methods do not scale beyond optimal trees of depth three. Most methods therefore rely on a coarse binarization of continuous features to maintain scalability. We propose a novel algorithm that optimizes trees directly on the continuous feature data using dynamic programming with branch-and-bound. We develop new pruning techniques that eliminate many sub-optimal splits from the search when they are similar to previously computed splits, and we provide an efficient subroutine for computing optimal depth-two trees. Our experiments demonstrate that these techniques improve runtime by one or more orders of magnitude over state-of-the-art optimal methods and improve test accuracy by 5% over greedy heuristics.
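To illustrate the depth-two subproblem the abstract refers to, here is a minimal brute-force sketch: it finds the misclassification count of an optimal depth-two tree by enumerating every candidate threshold (midpoints between consecutive distinct feature values) at the root and solving each child as an optimal depth-one tree. This is only an illustrative baseline assuming NumPy arrays and integer class labels; the paper's actual subroutine uses dynamic programming with branch-and-bound pruning, which this sketch does not implement.

```python
import numpy as np

def candidate_thresholds(X, j):
    # Midpoints between consecutive distinct values of feature j:
    # every distinct axis-aligned split on continuous data is covered.
    v = np.unique(X[:, j])
    return (v[:-1] + v[1:]) / 2

def leaf_errors(y):
    # A leaf predicts the majority class; errors = samples not in it.
    if len(y) == 0:
        return 0
    return len(y) - np.bincount(y).max()

def best_depth1(X, y):
    # Optimal depth-one tree: exhaustively try every feature/threshold
    # pair; fall back to a single leaf if no split helps.
    best = leaf_errors(y)
    for j in range(X.shape[1]):
        for t in candidate_thresholds(X, j):
            mask = X[:, j] <= t
            best = min(best, leaf_errors(y[mask]) + leaf_errors(y[~mask]))
    return best

def best_depth2(X, y):
    # Optimal depth-two tree: for each root split, each child is an
    # independent depth-one subproblem (no pruning in this sketch).
    best = best_depth1(X, y)
    for j in range(X.shape[1]):
        for t in candidate_thresholds(X, j):
            mask = X[:, j] <= t
            best = min(best, best_depth1(X[mask], y[mask])
                           + best_depth1(X[~mask], y[~mask]))
    return best
```

On XOR-like data (four points, labels 0/1/1/0) a single split cannot do better than two errors, while a depth-two tree classifies everything correctly, which is exactly the gap between the depth-one and depth-two optima.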
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Classification | dry-bean (test) | Accuracy | 58.2 | 39 |
| Decision Tree Induction | UCI Machine Learning Repository 16 datasets (Average) | Average Primal Integral | 93.8 | 28 |
| Decision Tree Learning | UCI Machine Learning Repository Average of 16 datasets (train test) | Average Primal Integral | 93.8 | 28 |
| Classification | Raisin UCI (test) | Accuracy | 82.9 | 13 |
| Classification | segment UCI (test) | Accuracy | 96.4 | 12 |
| Classification | skin UCI (test) | Accuracy | 99.4 | 11 |
| Classification | UCI Htru2 (test) | Accuracy | 90.6 | 9 |
| Classification | occupancy UCI (test) | Accuracy | 99.2 | 6 |
| Classification | bank UCI (test) | Accuracy | 98.8 | 6 |
| Classification | rice UCI (test) | Accuracy | 91.7 | 6 |