
RocketStack: Level-aware Deep Recursive Ensemble Learning Architecture

About

Ensemble learning remains a cornerstone of machine learning, with stacking integrating the predictions of multiple base learners through a meta-model. However, deep stacking remains uncommon due to feature redundancy, complexity, and computational burden. To address these limitations, RocketStack is introduced as a level-aware recursive stacking architecture explored up to ten stacking levels, extending beyond prior architectures. At level 1, base-learner predictions are fused with the original features; at later levels, weaker learners are incrementally pruned using out-of-fold (OOF) scores. To curb early saturation, pruning is regularized by applying Gaussian perturbations at two noise scales to the OOF scores before selecting models for the next stacking level, alongside deterministic pruning. To control feature growth, periodic compression is applied at levels 3, 6, and 9 using Simple, Fast, Efficient (SFE) filtering, attention-based selection, and autoencoders. Across 33 datasets (23 binary, 10 multi-class), linear mixed-effects trend tests confirm that accuracy increases with depth, and the best meta-model per level increasingly outperforms the best standalone ensemble. OOF-perturbed pruning improves stability and late-level gains, while periodic compression yields substantial runtime and dimensionality reductions with minimal accuracy loss. At the deepest level, accuracy slightly surpasses established deep tabular baselines. Hyperparameter optimization boosts the baselines' early performance, but untuned RocketStack closes the gap with depth and remains competitive at later levels. RocketStack thus achieves deep recursive stacking with sublinear computational growth and provides a modular, depth-aware foundation for scalable decision fusion as model pools and feature spaces evolve.
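The recursive loop described above can be sketched with scikit-learn. This is a minimal illustration, not the paper's implementation: the `stack_level` helper, the `noise_scale` and `keep_frac` parameters, the 75% retention rate, and the choice of base learners are all hypothetical, and the paper's periodic SFE/attention/autoencoder compression steps are omitted.

```python
import numpy as np
from sklearn.base import clone
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

def stack_level(X, y, learners, noise_scale=0.02, keep_frac=0.75):
    """One recursive stacking level (illustrative): fuse OOF class
    probabilities with the incoming features, then prune the weakest
    learners by Gaussian-perturbed OOF accuracy."""
    oof_blocks, scores = [], []
    for est in learners:
        oof = cross_val_predict(clone(est), X, y, cv=5, method="predict_proba")
        oof_blocks.append(oof)
        scores.append(float((oof.argmax(axis=1) == y).mean()))
    X_next = np.hstack([X] + oof_blocks)           # fuse features + predictions
    # Gaussian perturbation of OOF scores regularizes the pruning decision.
    noisy = np.asarray(scores) + rng.normal(0.0, noise_scale, len(scores))
    n_keep = max(1, int(len(learners) * keep_frac))
    keep = np.argsort(noisy)[::-1][:n_keep]        # strongest learners survive
    return X_next, [learners[i] for i in keep]

X, y = make_classification(n_samples=300, n_features=10, random_state=0)
learners = [
    LogisticRegression(max_iter=1000),
    DecisionTreeClassifier(random_state=0),
    RandomForestClassifier(n_estimators=50, random_state=0),
    GradientBoostingClassifier(random_state=0),
]
X1, learners = stack_level(X, y, learners)   # level 1: 4 -> 3 learners, 18 features
X2, learners = stack_level(X1, y, learners)  # level 2: 3 -> 2 learners, 24 features
meta = LogisticRegression(max_iter=1000).fit(X2, y)
```

Because the feature matrix grows by only a few probability columns per level while the learner pool shrinks, each additional level costs less than the one before, which is the intuition behind the sublinear computational growth claimed above.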

Çağatay Demirel • 2025

Related benchmarks

Task                        Dataset                                    Result           Rank
Binary Classification       Grand-averaged Binary Datasets (test)      Accuracy 97.69   51
Multi-class Classification  Grand-averaged Multi-class Datasets (test) Accuracy 98.6    12
Binary Classification       33 Tabular Datasets (Binary)               Accuracy 88.9    4
Multi-class Classification  33 Tabular Datasets (Multi-class)          Accuracy 94.82   4
