
MiniViT: Compressing Vision Transformers with Weight Multiplexing

About

Vision Transformer (ViT) models have recently drawn much attention in computer vision due to their high model capability. However, ViT models suffer from a huge number of parameters, restricting their applicability on devices with limited memory. To alleviate this problem, we propose MiniViT, a new compression framework that reduces the parameter count of vision transformers while retaining the same performance. The central idea of MiniViT is to multiplex the weights of consecutive transformer blocks. More specifically, the weights are shared across layers, while a transformation is imposed on the weights to increase diversity. Weight distillation over self-attention is also applied to transfer knowledge from large-scale ViT models to weight-multiplexed compact models. Comprehensive experiments demonstrate the efficacy of MiniViT, showing that it can reduce the size of the pre-trained Swin-B transformer by 48%, while achieving an increase of 1.0% in Top-1 accuracy on ImageNet. Moreover, using a single layer of parameters, MiniViT is able to compress DeiT-B by 9.7 times, from 86M to 9M parameters, without seriously compromising performance. Finally, we verify the transferability of MiniViT by reporting its performance on downstream benchmarks. Code and models are available.
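The core idea described above (one set of block weights reused across consecutive layers, with a small per-layer transformation restoring diversity) can be sketched in PyTorch. This is a minimal illustrative sketch, not the MiniViT implementation: the class names `SharedBlock` and `WeightMultiplexedEncoder`, and the choice of a simple per-layer linear transform, are assumptions for demonstration.

```python
# Hypothetical sketch of weight multiplexing: one shared transformer block is
# reused `depth` times, and each reuse applies its own lightweight linear
# transformation so the effective layers are not identical.
import torch
import torch.nn as nn


class SharedBlock(nn.Module):
    """A single transformer block whose weights are stored once and shared."""

    def __init__(self, dim, heads):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim)
        )

    def forward(self, x):
        h = self.norm1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]
        return x + self.mlp(self.norm2(x))


class WeightMultiplexedEncoder(nn.Module):
    """Reuses one SharedBlock `depth` times; each reuse gets a tiny
    layer-specific transformation, far cheaper than `depth` full blocks."""

    def __init__(self, dim=64, heads=4, depth=6):
        super().__init__()
        self.shared = SharedBlock(dim, heads)  # block parameters stored once
        # Per-layer transformations to increase diversity across reuses.
        self.transforms = nn.ModuleList(
            nn.Linear(dim, dim) for _ in range(depth)
        )

    def forward(self, x):
        for transform in self.transforms:
            # Shared weights + layer-specific transformation per layer.
            x = transform(self.shared(x))
        return x
```

Because the block weights are stored once, the encoder's parameter count grows only by the small per-layer transforms as depth increases, rather than by a full block per layer.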

Jinnian Zhang, Houwen Peng, Kan Wu, Mengchen Liu, Bin Xiao, Jianlong Fu, Lu Yuan • 2022

Related benchmarks

Task                  Dataset               Metric           Result  Rank
Image Classification  CIFAR-100             Top-1 Accuracy   91.5    622
Image Classification  ImageNet-1K           Top-1 Accuracy   84.7    524
Image Classification  CIFAR-10              Accuracy         99.3    507
Image Classification  Stanford Cars         --               --      477
Image Classification  ImageNet-1k (val)     Top-1 Accuracy   85.5    287
Image Classification  Oxford-IIIT Pets      Accuracy         95.5    259
Image Classification  ImageNet Real (val)   Top-1 Accuracy   89.9    181
Image Classification  ImageNet V2 (test)    Top-1 Accuracy   76.1    181
Image Classification  Oxford 102 Flowers    Top-1 Accuracy   98.3    68
