
Progressive Ensemble Distillation: Building Ensembles for Efficient Inference

About

We study the problem of progressive ensemble distillation: given a large, pretrained teacher model $g$, we seek to decompose it into smaller, low-inference-cost student models $f_i$, such that progressively evaluating additional models in the ensemble leads to improved predictions. The resulting ensemble allows accuracy vs. inference cost to be tuned flexibly at runtime, which is useful for a number of applications in on-device inference. Our proposed method, B-DISTIL, relies on an algorithmic procedure that uses function composition over intermediate activations to construct expressive ensembles with similar performance to $g$, but with smaller student models. We demonstrate the effectiveness of B-DISTIL by decomposing pretrained models across standard image, speech, and sensor datasets. We also provide theoretical guarantees in terms of convergence and generalization.
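To make the runtime accuracy/cost trade-off concrete, the sketch below shows one way progressive evaluation of an ensemble could look, assuming the students $f_i$ are callables that map an input batch to class logits. The function name `progressive_predict`, the plain logit averaging, and the confidence-based early exit are illustrative assumptions; the abstract's composition over intermediate activations and B-DISTIL's actual combination rule are not reproduced here.

```python
# Minimal sketch of progressive ensemble inference (not the paper's code).
# Assumes `students` is a non-empty list of torch modules returning logits.
import torch


@torch.no_grad()
def progressive_predict(students, x, confidence_threshold=0.95):
    """Evaluate students one at a time, averaging their logits.

    Stops early once the running ensemble is confident enough, so fewer
    student evaluations are paid for on easy inputs.
    """
    logits_sum = None
    for t, f in enumerate(students, start=1):
        out = f(x)  # logits from student f_t
        logits_sum = out if logits_sum is None else logits_sum + out
        probs = torch.softmax(logits_sum / t, dim=-1)
        # Exit when every example in the batch exceeds the threshold.
        if probs.max(dim=-1).values.min() >= confidence_threshold:
            break
    return probs.argmax(dim=-1), t  # predictions and #students evaluated
```

For example, `preds, cost = progressive_predict([f1, f2, f3], batch)` would evaluate only `f1` on inputs the first student already classifies confidently, and fall through to the full ensemble otherwise.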

Don Kurian Dennis, Abhishek Shetty, Anish Sevekari, Kazuhito Koishida, Virginia Smith • 2023

Related benchmarks

Task              Dataset    Metric    Result   Rank
Early prediction  DSA-19     Accuracy  87.2     6
Early prediction  Google-13  Accuracy  0.9225   6

Other info

Code
