
A Greedy Algorithm for Quantizing Neural Networks

About

We propose a new computationally efficient method for quantizing the weights of pre-trained neural networks that is general enough to handle both multi-layer perceptrons and convolutional neural networks. Our method deterministically quantizes layers in an iterative fashion with no complicated re-training required. Specifically, we quantize each neuron, or hidden unit, using a greedy path-following algorithm. This simple algorithm is equivalent to running a dynamical system, which we prove is stable for quantizing a single-layer neural network (or, alternatively, for quantizing the first layer of a multi-layer network) when the training data are Gaussian. We show that under these assumptions, the quantization error decays with the width of the layer, i.e., its level of over-parametrization. We provide numerical experiments on multi-layer networks to illustrate the performance of our methods on MNIST and CIFAR10 data, as well as for quantizing the VGG16 network using ImageNet data.

Eric Lybrand, Rayan Saab • 2020
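
The greedy step described in the abstract can be sketched in a few lines of NumPy. Below is a minimal, illustrative implementation of quantizing a single neuron with a path-following rule: at each coordinate it tracks the running discrepancy between the analog and quantized responses on the data, computes a closed-form correction, and rounds it to the nearest alphabet element. The function name `quantize_neuron_greedy`, the 9-level uniform alphabet, and the Gaussian toy data are assumptions for illustration, not the authors' released code.

```python
import numpy as np

def quantize_neuron_greedy(w, X, alphabet):
    """Greedily quantize one neuron's weights w against data X (m x n).

    Path-following rule (sketch): track the running discrepancy
    u_t = sum_{j<=t} (w_j - q_j) X_j between the analog and quantized
    responses, and pick each q_t from `alphabet` to keep u_t small.
    """
    m, n = X.shape
    q = np.zeros(n)
    u = np.zeros(m)  # running error between analog and quantized paths
    for t in range(n):
        Xt = X[:, t]
        target = u + w[t] * Xt
        # Real-valued minimizer of ||target - p * Xt||_2 over p, then
        # rounded to the nearest alphabet element (valid because the
        # objective is quadratic in p).
        c = Xt @ target / max(Xt @ Xt, 1e-12)
        q[t] = alphabet[np.argmin(np.abs(alphabet - c))]
        u = target - q[t] * Xt
    return q

# Toy usage: Gaussian data and a 9-level uniform alphabet (both assumptions).
rng = np.random.default_rng(0)
m, n = 500, 128
X = rng.standard_normal((m, n))
w = rng.standard_normal(n)
alphabet = 0.5 * np.arange(-4, 5)
q = quantize_neuron_greedy(w, X, alphabet)
rel_err = np.linalg.norm(X @ (w - q)) / np.linalg.norm(X @ w)
print(f"relative quantization error: {rel_err:.3f}")
```

On this toy setup, increasing the width n should shrink the printed relative error, in line with the decay-with-width result stated in the abstract.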

Related benchmarks

Task                 | Dataset                            | Metric                  | Result | Rank
Image Classification | CIFAR-10 (test)                    | Accuracy                | 88.88  | 3381
Language Modeling    | WikiText2                          | Perplexity              | 7.2    | 1875
Zero-shot Evaluation | Downstream Tasks Zero-shot         | Accuracy                | 71.9   | 278
Zero-shot Reasoning  | ARC-e, Winogrande, HellaSwag, PIQA | Normalized Avg Accuracy | 46     | 36
