
RepQ-ViT: Scale Reparameterization for Post-Training Quantization of Vision Transformers

About

Post-training quantization (PTQ), which requires only a tiny dataset for calibration and no end-to-end retraining, is a light and practical model compression technique. Recently, several PTQ schemes for vision transformers (ViTs) have been presented; unfortunately, they typically suffer from non-trivial accuracy degradation, especially in low-bit cases. In this paper, we propose RepQ-ViT, a novel PTQ framework for ViTs based on quantization scale reparameterization, to address the above issues. RepQ-ViT decouples the quantization and inference processes: the former employs complex quantizers, while the latter employs scale-reparameterized simplified quantizers. This ensures both accurate quantization and efficient inference, which distinguishes it from existing approaches that sacrifice quantization performance to meet the target hardware. More specifically, we focus on two components with extreme distributions: post-LayerNorm activations with severe inter-channel variation and post-Softmax activations with power-law features, and initially apply channel-wise quantization and log$\sqrt{2}$ quantization, respectively. Then, we reparameterize the scales to hardware-friendly layer-wise quantization and log2 quantization for inference, at only a slight accuracy or computational cost. Extensive experiments are conducted on multiple vision tasks with different model variants, proving that RepQ-ViT, without hyperparameters or expensive reconstruction procedures, can outperform existing strong baselines and encouragingly improve the accuracy of 4-bit PTQ of ViTs to a usable level. Code is available at https://github.com/zkkli/RepQ-ViT.
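The channel-wise-to-layer-wise reparameterization for post-LayerNorm activations can be sketched as follows. This is a minimal NumPy illustration based only on the abstract's description, not the paper's exact equations: it assumes uniform asymmetric quantization and a linear layer following the LayerNorm, and the variable names (`r1`, the compensation term `c`) are illustrative. The per-channel scales are folded into the LayerNorm affine parameters and the next layer's weights, so that inference needs only a single layer-wise scale while producing the same output.

```python
import numpy as np

def fake_quant(x, scale, zp, bits=8):
    # Uniform asymmetric quantize-dequantize ("fake quantization").
    q = np.clip(np.round(x / scale) + zp, 0, 2**bits - 1)
    return (q - zp) * scale

rng = np.random.default_rng(0)
C = 8
x_hat = rng.normal(size=(4, C))          # normalized input to LayerNorm's affine step
gamma = rng.uniform(0.5, 2.0, size=C)    # LayerNorm affine weight
beta = rng.normal(size=C)                # LayerNorm affine bias
W = rng.normal(size=(C, C))              # next linear layer's weight, shape (out, in)
bias = rng.normal(size=C)

x = gamma * x_hat + beta                 # post-LayerNorm activation

# Calibration: accurate channel-wise quantization parameters.
s = (x.max(axis=0) - x.min(axis=0)) / 255.0
z = np.round(-x.min(axis=0) / s)

# Reparameterize to a single hardware-friendly layer-wise scale/zero-point.
s_tilde = s.mean()
z_tilde = np.round(z.mean())
r1 = s / s_tilde                         # per-channel variation factor (illustrative name)
c = s_tilde * (z - z_tilde)              # per-channel additive compensation

gamma_rep = gamma / r1                   # absorbed into the LayerNorm affine...
beta_rep = beta / r1 + c
W_rep = W * r1                           # ...and into the next layer's weights
bias_rep = bias - c @ W_rep.T

# Channel-wise path (accurate but hardware-unfriendly).
y_ref = fake_quant(x, s, z) @ W.T + bias

# Reparameterized layer-wise path: one scale at inference, same output.
x_rep = gamma_rep * x_hat + beta_rep
y_rep = fake_quant(x_rep, s_tilde, z_tilde) @ W_rep.T + bias_rep

assert np.allclose(y_ref, y_rep)
```

The key point is that the reparameterization is exact for the quantized values, so the per-channel scales cost nothing at inference time; only the LayerNorm affine parameters and the adjacent weights are rewritten offline.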

Zhikai Li, Junrui Xiao, Lianwei Yang, Qingyi Gu • 2022

Related benchmarks

| Task                               | Dataset                 | Result               | Rank |
|------------------------------------|-------------------------|----------------------|------|
| Object Detection                   | COCO 2017 (val)         | -                    | 2454 |
| Image Classification               | ImageNet-1K 1.0 (val)   | Top-1 Accuracy 84.57 | 1866 |
| Image Classification               | ImageNet (val)          | Top-1 Acc 83.6       | 1206 |
| Instance Segmentation              | COCO 2017 (val)         | -                    | 1144 |
| Image Generation                   | ImageNet 256x256 (val)  | FID 4.51             | 307  |
| Instance Segmentation              | COCO                    | AP (mask) 44.8       | 279  |
| Image Generation                   | ImageNet (val)          | FID 2.55             | 198  |
| Image Generation                   | ImageNet 512x512 (val)  | FID-50K 59.65        | 184  |
| Class-conditional Image Generation | ImageNet 256x256 (test) | FID 3.79             | 167  |
| Object Detection                   | COCO                    | AP (Box) 51.5        | 144  |

Showing 10 of 18 rows
