
Post-Training Quantization for Vision Transformer

About

Recently, transformers have achieved remarkable performance on a variety of computer vision applications. Compared with mainstream convolutional neural networks, vision transformers often rely on sophisticated architectures to extract powerful feature representations, which makes them harder to deploy on mobile devices. In this paper, we present an effective post-training quantization algorithm for reducing the memory storage and computational costs of vision transformers. Basically, the quantization task can be regarded as finding the optimal low-bit quantization intervals for weights and inputs, respectively. To preserve the functionality of the attention mechanism, we introduce a ranking loss into the conventional quantization objective, aiming to keep the relative order of the self-attention results after quantization. Moreover, we thoroughly analyze the relationship between the quantization loss of different layers and feature diversity, and explore a mixed-precision quantization scheme that exploits the nuclear norm of each attention map and output feature. The effectiveness of the proposed method is verified on several benchmark models and datasets, on which it outperforms state-of-the-art post-training quantization algorithms. For instance, we obtain 81.29% top-1 accuracy with the DeiT-B model on ImageNet using about 8-bit quantization.
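As a rough illustration of the two ingredients the abstract names, the PyTorch sketch below pairs a simulated uniform quantizer with a pairwise ranking penalty on attention scores and a nuclear-norm diversity score. The function names, the hinge margin, and the exact penalty form are illustrative assumptions, not the paper's precise formulation.

```python
import torch

def uniform_quantize(x, interval, bits=8):
    # Simulated uniform quantization: round to the nearest multiple of the
    # interval (step size), then clamp to the signed b-bit integer range.
    qmax = 2 ** (bits - 1) - 1
    q = torch.clamp(torch.round(x / interval), -qmax - 1, qmax)
    return q * interval

def ranking_loss(attn_fp, attn_q, margin=0.1):
    # Pairwise hinge penalty (an assumed form): wherever the full-precision
    # attention puts score i above score j, ask the quantized attention to
    # keep that order by at least `margin`. Inputs have shape (..., N).
    diff_fp = attn_fp.unsqueeze(-1) - attn_fp.unsqueeze(-2)  # a_i - a_j, FP
    diff_q = attn_q.unsqueeze(-1) - attn_q.unsqueeze(-2)     # a_i - a_j, quantized
    violated = torch.relu(margin - diff_q) * (diff_fp > 0).float()
    return violated.mean()

def nuclear_norm(feature):
    # Sum of singular values of a (tokens x channels) feature matrix, used
    # as a proxy for feature diversity when assigning per-layer bit-widths.
    return torch.linalg.matrix_norm(feature.float(), ord='nuc')

# Hypothetical usage: score how much quantizing the attention disturbs
# the relative order of its entries.
attn = torch.randn(2, 4, 16, 16)               # (batch, heads, tokens, tokens)
attn_q = uniform_quantize(attn, interval=0.05)
loss = ranking_loss(attn, attn_q)
```

In the framing of the abstract, the quantization interval would be searched to minimize a distance between full-precision and quantized outputs plus a ranking term like the one above, and layers whose attention maps and output features have larger nuclear norms would be assigned higher bit-widths under the mixed-precision scheme.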

Zhenhua Liu, Yunhe Wang, Kai Han, Siwei Ma, Wen Gao • 2021

Related benchmarks

Task                 | Dataset                | Result                    | Rank
Object Detection     | COCO 2017 (val)        | AP 41.2                   | 2454
Object Detection     | PASCAL VOC 2007 (test) | mAP 57.6                  | 821
Image Classification | ImageNet (val)         | --                        | 300
Object Detection     | MS-COCO 2017 (val)     | mAP 41.2                  | 237
Image Classification | ImageNet-1k (val)      | Top-1 Acc (DeiT-S) 78.09  | 20
Object Detection     | COCO                   | Box AP 40.5               | 9
