Data-Free Quantization Through Weight Equalization and Bias Correction

About

We introduce a data-free quantization method for deep neural networks that does not require fine-tuning or hyperparameter selection. It achieves near-original model performance on common computer vision architectures and tasks. 8-bit fixed-point quantization is essential for efficient inference on modern deep learning hardware. However, quantizing models to run in 8 bits is a non-trivial task, frequently leading either to significant performance reduction or to engineering time spent on training a network to be amenable to quantization. Our approach relies on equalizing the weight ranges in the network by exploiting a scale-equivariance property of activation functions. In addition, the method corrects biases in the error that are introduced during quantization. This improves quantized model performance and can be applied to many common computer vision architectures with a straightforward API call. For common architectures, such as the MobileNet family, we achieve state-of-the-art quantized model performance. We further show that the method extends to other computer vision architectures and tasks such as semantic segmentation and object detection.
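
To make the two ideas concrete, below is a minimal NumPy sketch for a pair of fully connected layers with a ReLU between them; the function names, the eps guard, and the e_x argument are illustrative assumptions, not the paper's actual API.

import numpy as np

def equalize_pair(w1, b1, w2, eps=1e-8):
    # Cross-layer equalization of two consecutive layers separated by a ReLU,
    # which is scale-equivariant: relu(x / s) * s == relu(x) for any s > 0.
    # w1: [out, in] and b1: [out] for layer 1; w2: [out2, out] for layer 2.
    r1 = np.abs(w1).max(axis=1)        # weight range per output channel of layer 1
    r2 = np.abs(w2).max(axis=0)        # weight range per input channel of layer 2
    s = np.sqrt(r1 * r2) / (r2 + eps)  # after rescaling, both ranges become sqrt(r1 * r2)
    s = np.maximum(s, eps)             # numerical guard for all-zero channels
    # The composed function is unchanged:
    # w2 @ relu(w1 @ x + b1) == (w2 * s) @ relu((w1 / s) @ x + b1 / s)
    return w1 / s[:, None], b1 / s, w2 * s[None, :]

def correct_bias(w, w_q, b, e_x):
    # Quantizing w to w_q shifts the layer's expected pre-activation by
    # (w_q - w) @ E[x]. Subtracting that shift from the bias removes the
    # systematic part of the quantization error; e_x is an estimate of E[x],
    # which the paper obtains data-free from batch normalization statistics.
    return b - (w_q - w) @ e_x

The same per-channel rescaling carries over to convolutional layers, applied along the output channels of one layer and the matching input channels of the next.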

Markus Nagel, Mart van Baalen, Tijmen Blankevoort, Max Welling • 2019

Related benchmarks

Task                   Dataset            Metric            Result   Rank
Image Classification   ImageNet-1k (val)  Top-1 Accuracy    71.2     1453
Image Classification   ImageNet (val)     Top-1 Accuracy    73.03    1206
Image Classification   CIFAR100 (test)    Top-1 Accuracy    59.42    377
Semantic Segmentation  CamVid             mIoU              51.02    61
Semantic Segmentation  Cityscapes         mIoU              57.34    30
Object Detection       VOC 2012 (test)    mAP               69.16    25
Image Classification   ImageNet 34 (val)  Feature Accuracy  0.7172   13
