A Greedy Algorithm for Quantizing Neural Networks
About
We propose a new computationally efficient method for quantizing the weights of pre-trained neural networks that is general enough to handle both multi-layer perceptrons and convolutional neural networks. Our method deterministically quantizes layers in an iterative fashion with no complicated re-training required. Specifically, we quantize each neuron, or hidden unit, using a greedy path-following algorithm. This simple algorithm is equivalent to running a dynamical system, which we prove is stable for quantizing a single-layer neural network (or, alternatively, for quantizing the first layer of a multi-layer network) when the training data are Gaussian. We show that under these assumptions, the quantization error decays with the width of the layer, i.e., its level of over-parametrization. We provide numerical experiments on multi-layer networks to illustrate the performance of our methods on MNIST and CIFAR10 data, as well as for quantizing the VGG16 network using ImageNet data.
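To make the per-neuron greedy step concrete, the following is a minimal NumPy sketch of one plausible reading of the path-following idea described above: each weight is quantized in sequence by picking the alphabet element that best cancels the accumulated error, which plays the role of the dynamical-system state mentioned in the abstract. The function name `greedy_quantize_neuron`, the argument layout, and the example alphabet are illustrative assumptions, not the paper's reference implementation.

```python
import numpy as np

def greedy_quantize_neuron(w, X, alphabet):
    """Greedy path-following quantization of a single neuron (illustrative sketch).

    w        : (m,) real-valued weights of the neuron
    X        : (n, m) data matrix; column X[:, t] is the input feature hit by w[t]
    alphabet : 1-D array of allowed quantized values, e.g. np.array([-1., 0., 1.])
    Returns q, the quantized weights, chosen one coordinate at a time so the
    running error vector u (the dynamical-system state) stays small.
    """
    m = w.shape[0]
    q = np.zeros(m)
    u = np.zeros(X.shape[0])              # accumulated quantization error
    for t in range(m):
        v = u + w[t] * X[:, t]            # error if w[t] were kept exact
        # greedy step: choose the alphabet element whose contribution best cancels v
        costs = [np.linalg.norm(v - a * X[:, t]) for a in alphabet]
        q[t] = alphabet[int(np.argmin(costs))]
        u = v - q[t] * X[:, t]            # update the error state
    return q

# Example usage (hypothetical sizes): quantize a width-512 neuron on 256 samples
# w = np.random.randn(512); X = np.random.randn(256, 512)
# q = greedy_quantize_neuron(w, X, np.array([-1., 0., 1.]))
```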
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Image Classification | CIFAR-10 (test) | Accuracy | 88.88 | 3381 |
| Language Modeling | WikiText2 | Perplexity | 7.2 | 1875 |
| Zero-shot Evaluation | Downstream Tasks Zero-shot | Accuracy | 71.9 | 278 |
| Zero-shot Reasoning | ARC-e, Winogrande, HellaSwag, PIQA | Normalized Avg Accuracy | 46 | 36 |