Zero Time Waste: Recycling Predictions in Early Exit Neural Networks
About
Reducing the processing time of large deep learning models is a fundamental challenge in many real-world applications. Early exit methods strive towards this goal by attaching additional Internal Classifiers (ICs) to intermediate layers of a neural network. ICs can quickly return predictions for easy examples and, as a result, reduce the average inference time of the whole model. However, if a particular IC does not decide to return an answer early, its predictions are discarded and its computations are effectively wasted. To solve this issue, we introduce Zero Time Waste (ZTW), a novel approach in which each IC reuses the predictions returned by its predecessors by (1) adding direct connections between ICs and (2) combining previous outputs in an ensemble-like manner. We conduct extensive experiments across various datasets and architectures to demonstrate that ZTW achieves a significantly better accuracy vs. inference time trade-off than other recently proposed early exit methods.
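To make the mechanism concrete, here is a minimal PyTorch sketch of ZTW-style inference, not the authors' released code: the names `CascadingIC` and `ZTWSketch`, the scalar per-IC ensemble weights, and the single confidence threshold `exit_threshold` are all illustrative simplifications of the design described above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CascadingIC(nn.Module):
    """One internal classifier (IC) with a direct connection to its
    predecessor: it sees both the backbone features at its layer and the
    logits the previous IC produced, so earlier work is not discarded."""

    def __init__(self, feat_dim: int, num_classes: int):
        super().__init__()
        self.own_head = nn.Linear(feat_dim, num_classes)
        self.combine = nn.Linear(2 * num_classes, num_classes)

    def forward(self, feats: torch.Tensor, prev_logits: torch.Tensor) -> torch.Tensor:
        own = self.own_head(feats.flatten(1))
        return self.combine(torch.cat([own, prev_logits], dim=-1))


class ZTWSketch(nn.Module):
    """Early-exit inference: run backbone blocks one at a time, let each
    cascading IC predict, fold its prediction into a running weighted
    geometric ensemble, and stop as soon as the ensemble is confident."""

    def __init__(self, blocks, feat_dims, num_classes, exit_threshold=0.9):
        super().__init__()
        self.blocks = nn.ModuleList(blocks)
        self.ics = nn.ModuleList([CascadingIC(d, num_classes) for d in feat_dims])
        # One learned ensemble weight per IC (a scalar here for simplicity).
        self.ic_weights = nn.Parameter(torch.ones(len(feat_dims)))
        self.exit_threshold = exit_threshold
        self.num_classes = num_classes

    @torch.no_grad()
    def forward(self, x: torch.Tensor):
        prev_logits = x.new_zeros(x.size(0), self.num_classes)
        log_ensemble = x.new_zeros(x.size(0), self.num_classes)
        for i, (block, ic) in enumerate(zip(self.blocks, self.ics)):
            x = block(x)                 # backbone computation is reused, never redone
            logits = ic(x, prev_logits)  # cascade connection to the previous IC
            prev_logits = logits
            # Weighted geometric ensemble, accumulated in log space.
            log_ensemble = log_ensemble + self.ic_weights[i] * F.log_softmax(logits, dim=-1)
            probs = F.softmax(log_ensemble, dim=-1)
            # For simplicity the whole batch exits together; per-sample
            # routing needs bookkeeping of which examples already left.
            if probs.max(dim=-1).values.min() >= self.exit_threshold:
                return probs, i          # confident enough: exit early
        return probs, len(self.ics) - 1  # no early exit: use the last IC


# Toy usage with a 3-block MLP backbone (all shapes are made up):
blocks = [nn.Sequential(nn.Linear(32, 32), nn.ReLU()) for _ in range(3)]
model = ZTWSketch(blocks, feat_dims=[32, 32, 32], num_classes=10)
probs, exit_index = model(torch.randn(8, 32))
```

Accumulating the geometric ensemble in log space (summing weighted log-probabilities, then applying one softmax) keeps each step cheap and avoids repeatedly multiplying small probabilities.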
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Image Classification | CIFAR-100 (test) | Accuracy: 76.4% | 3518 |
| Image Classification | CIFAR-10 (test) | Accuracy: 94.7% | 3381 |
| Image Classification | TinyImageNet (test) | Accuracy: 60.5% | 366 |
| Image Classification | ImageNet (test) | -- | 235 |
| Retinal Disease Classification | OCT 2017 (test) | Accuracy: 98.5% | 24 |