To prune, or not to prune: exploring the efficacy of pruning for model compression
About
Model pruning seeks to induce sparsity in a deep neural network's various connection matrices, thereby reducing the number of nonzero-valued parameters in the model. Recent reports (Han et al., 2015; Narang et al., 2017) prune deep networks at the cost of only a marginal loss in accuracy and achieve a sizable reduction in model size. This hints at the possibility that the baseline models in these experiments are severely over-parameterized at the outset, and that a viable alternative for model compression might be to simply reduce the number of hidden units while maintaining the model's dense connection structure, exposing a similar trade-off in model size and accuracy. We investigate these two distinct paths for model compression within the context of energy-efficient inference in resource-constrained environments and propose a new gradual pruning technique that is simple and straightforward to apply across a variety of models/datasets with minimal tuning and can be seamlessly incorporated within the training process. We compare the accuracy of large, but pruned models (large-sparse) and their smaller, but dense (small-dense) counterparts with identical memory footprints. Across a broad range of neural network architectures (deep CNNs, stacked LSTM, and seq2seq LSTM models), we find large-sparse models to consistently outperform small-dense models and achieve up to a 10x reduction in the number of non-zero parameters with minimal loss in accuracy.
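The gradual pruning technique referred to above ramps each layer's sparsity from an initial value s_i to a final value s_f over n pruning steps spaced Δt training steps apart, following the cubic schedule s_t = s_f + (s_i − s_f)(1 − (t − t_0)/(nΔt))^3 and masking out the smallest-magnitude weights at each step. Below is a minimal NumPy sketch of that schedule and of one magnitude-pruning step; the function names and the hyperparameter defaults (s_f = 0.9, n = 100, Δt = 100) are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np

def sparsity_at_step(t, s_i=0.0, s_f=0.9, t0=0, n=100, dt=100):
    """Target sparsity s_t at training step t, following the cubic ramp
    s_t = s_f + (s_i - s_f) * (1 - (t - t0) / (n * dt))**3,
    clamped to [s_i, s_f] outside the pruning interval."""
    progress = min(max((t - t0) / (n * dt), 0.0), 1.0)
    return s_f + (s_i - s_f) * (1.0 - progress) ** 3

def magnitude_prune(weights, sparsity):
    """Zero out the fraction `sparsity` of `weights` with the smallest
    magnitudes. Returns (pruned_weights, boolean mask); during training
    the mask would be reapplied after every optimizer update so that
    pruned connections stay at zero."""
    k = int(sparsity * weights.size)
    mask = np.ones(weights.size, dtype=bool)
    if k > 0:
        # Indices of the k smallest-magnitude entries in the flattened tensor.
        smallest = np.argsort(np.abs(weights), axis=None)[:k]
        mask[smallest] = False
    mask = mask.reshape(weights.shape)
    return weights * mask, mask

# Example: prune a random weight matrix every dt = 100 steps.
rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256))
for t in range(0, 10_001, 100):
    w, mask = magnitude_prune(w, sparsity_at_step(t))
# (In real training, gradient updates would run between pruning steps.)
```

Because the schedule's slope decays over time, pruning is aggressive early, when redundant connections are abundant, and tapers off as the network approaches its final sparsity, which is the intuition the paper gives for the cubic ramp.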
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Image Classification | ImageNet-1K 1.0 (val) | Top-1 Accuracy | 75.3 | 1866 |
| Image Classification | ImageNet-1k (val) | Top-1 Accuracy | 75.6 | 1453 |
| Image Classification | ImageNet (val) | Top-1 Accuracy | 73.91 | 1206 |
| Visual Question Answering | TextVQA | -- | -- | 1117 |
| Image Classification | ImageNet-1k (val) | Top-1 Accuracy | 61.8 | 840 |
| Image Classification | ImageNet 1k (test) | Top-1 Accuracy | 76.88 | 798 |
| Image Classification | ImageNet-1k (val) | Top-1 Accuracy | 80.79 | 706 |
| Image Classification | ImageNet-1K | Top-1 Accuracy | 69.56 | 524 |
| Language Modeling | PTB (test) | Perplexity | 126 | 471 |
| Language Modeling | C4 (val) | Perplexity | 63.43 | 392 |