
QDrop: Randomly Dropping Quantization for Extremely Low-bit Post-Training Quantization

About

Recently, post-training quantization (PTQ) has attracted much attention as a way to produce efficient neural networks without lengthy retraining. Despite its low cost, current PTQ methods tend to fail under extremely low-bit settings. In this study, we are the first to confirm that properly incorporating activation quantization into the PTQ reconstruction benefits the final accuracy. To understand the underlying reason, we establish a theoretical framework indicating that the flatness of the optimized low-bit model on calibration and test data is crucial. Based on this conclusion, a simple yet effective approach dubbed QDrop is proposed, which randomly drops the quantization of activations during PTQ. Extensive experiments on various tasks, including computer vision (image classification, object detection) and natural language processing (text classification and question answering), prove its superiority. With QDrop, the limit of PTQ is pushed to 2-bit activations for the first time, and the accuracy boost can be up to 51.49%. Without bells and whistles, QDrop establishes a new state of the art for PTQ. Our code is available at https://github.com/wimh966/QDrop and has been integrated into MQBench (https://github.com/ModelTC/MQBench).
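The core idea, randomly replacing quantized activations with their full-precision counterparts during PTQ reconstruction, can be sketched as follows. This is a minimal illustration in PyTorch, assuming uniform affine fake quantization and an element-wise drop with probability `drop_prob`; the exact granularity and probability used by the authors may differ from this sketch.

```python
import torch

def fake_quantize(x, scale, zero_point, qmin=0, qmax=255):
    # Standard uniform affine fake quantization: quantize, clamp, dequantize.
    q = torch.clamp(torch.round(x / scale) + zero_point, qmin, qmax)
    return (q - zero_point) * scale

def qdrop_activation(x, scale, zero_point, drop_prob=0.5, training=True):
    # Sketch of the QDrop idea: during PTQ reconstruction (training=True),
    # each activation element keeps its full-precision value with
    # probability drop_prob; otherwise it is fake-quantized.
    # drop_prob and the element-wise granularity are assumptions here.
    if not training:
        return fake_quantize(x, scale, zero_point)
    x_q = fake_quantize(x, scale, zero_point)
    mask = (torch.rand_like(x) < drop_prob).float()
    return mask * x + (1.0 - mask) * x_q
```

At inference time (`training=False`) all activations are quantized as usual; the random drop only perturbs the reconstruction objective, which, per the flatness argument in the abstract, encourages a solution that remains accurate once full quantization is applied.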

Xiuying Wei, Ruihao Gong, Yuhang Li, Xianglong Liu, Fengwei Yu• 2022

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Semantic segmentation | ADE20K (val) | mIoU 33.58 | 2888 |
| Instance Segmentation | COCO 2017 (val) | -- | 1201 |
| Oriented Object Detection | DOTA v1.0 (test) | -- | 378 |
| Image Classification | ImageNet (val) | -- | 300 |
| Instance Segmentation | COCO | mAP 47.9 | 144 |
| Video Object Segmentation | SA-V (val) | J&F Score 63.82 | 114 |
| Video Object Segmentation | SA-V (test) | J&F 66.5 | 110 |
| Camera pose estimation | CO3D v2 (test) | AUC@30 88.8 | 54 |
| Video segmentation | DAVIS | J&F Score 86.18 | 41 |
| Instance Segmentation | COCO | APm 42.5 | 32 |
