
Visformer: The Vision-friendly Transformer

About

The past year has witnessed the rapid development of applying the Transformer module to vision problems. While some researchers have demonstrated that Transformer-based models enjoy a favorable ability to fit data, there is a growing body of evidence showing that these models suffer from over-fitting, especially when the training data is limited. This paper offers an empirical study that performs step-by-step operations to gradually transition a Transformer-based model into a convolution-based model. The results obtained during this transition deliver useful messages for improving visual recognition. Based on these observations, we propose a new architecture named Visformer, abbreviated from 'Vision-friendly Transformer'. At the same computational complexity, Visformer outperforms both Transformer-based and convolution-based models in ImageNet classification accuracy, and the advantage becomes more significant when the model complexity is lower or the training set is smaller. The code is available at https://github.com/danczs/Visformer.
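To make the "same computational complexity" comparison concrete, the standard back-of-the-envelope FLOP approximations for a self-attention layer and a 3x3 convolution can be sketched as below. This is an illustrative calculation using textbook formulas, not code or numbers from the paper; the token count and dimensions are arbitrary example values.

```python
def attention_flops(n_tokens, dim):
    """Approximate multiply-add count for one self-attention layer.

    Counts the q/k/v and output projections (4 * N * d^2) plus the
    attention-score matrix and the weighted sum (2 * N^2 * d).
    """
    proj = 4 * n_tokens * dim ** 2
    attn = 2 * n_tokens ** 2 * dim
    return proj + attn


def conv3x3_flops(height, width, c_in, c_out):
    """Approximate multiply-add count for a 3x3 convolution
    over an H x W feature map (ignoring padding edge effects)."""
    return height * width * c_in * c_out * 9


# Example: a 14x14 token grid (196 tokens) at dimension 384,
# compared against a 3x3 conv on a 14x14 map with 384 channels.
print(attention_flops(196, 384))
print(conv3x3_flops(14, 14, 384, 384))
```

Budget-matching of this kind is how one keeps a Transformer stage and a convolution stage comparable while swapping operations step by step.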

Zhengsu Chen, Lingxi Xie, Jianwei Niu, Xuefeng Liu, Longhui Wei, Qi Tian • 2021

Related benchmarks

Task                   Dataset                                 Metric           Result   Rank
Instance Segmentation  COCO 2017 (val)                         -                -        1144
Image Classification   ImageNet 1k (test)                      Top-1 Accuracy   82.19    798
Image Classification   ImageNet-1k (val)                       Top-1 Accuracy   83       706
Image Classification   ImageNet                                -                -        429
Object Detection       COCO 2017                               AP (Box)         51.6     279
Image Classification   ImageNet-1k (val)                       Top-1 Accuracy   83       91
Image Classification   ImageNet 1k (test)                      Top-1 Accuracy   82.1     55
Image Classification   ImageNet (10%)                          Top-1 Accuracy   90.06    32
Image Classification   ImageNet 1% labeled subset 1k (test)    Top-1 Accuracy   91.6     22
Image Classification   ImageNet 10% labeled 1k (test)          -                -        13

Other info

Code

https://github.com/danczs/Visformer