
Three things everyone should know about Vision Transformers

About

After their initial success in natural language processing, transformer architectures have rapidly gained traction in computer vision, providing state-of-the-art results for tasks such as image classification, detection, segmentation, and video analysis. We offer three insights based on simple and easy-to-implement variants of vision transformers. (1) The residual layers of vision transformers, which are usually processed sequentially, can to some extent be processed efficiently in parallel without noticeably affecting accuracy. (2) Fine-tuning the weights of the attention layers alone is sufficient to adapt vision transformers to a higher resolution and to other classification tasks. This saves compute, reduces peak memory consumption at fine-tuning time, and allows sharing the majority of weights across tasks. (3) Adding MLP-based patch pre-processing layers improves BERT-like self-supervised training based on patch masking. We evaluate the impact of these design choices on the ImageNet-1k dataset, and confirm our findings on the ImageNet-v2 test set. Transfer performance is measured across six smaller datasets.
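To make insight (1) concrete, the sketch below contrasts a standard sequential transformer block with the parallel variant described in the abstract. The attention and MLP sub-blocks are stand-ins (hypothetical linear maps, not real multi-head attention), chosen only to show the dataflow: in the parallel variant both sub-blocks read the same input, so they can run concurrently and their outputs are summed into the residual stream.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # hypothetical embedding dimension

# Stand-ins for the attention and MLP sub-blocks. These are plain linear
# maps for illustration; a real ViT block uses multi-head self-attention
# and a two-layer MLP with a non-linearity.
W_attn = rng.normal(scale=0.1, size=(d, d))
W_mlp = rng.normal(scale=0.1, size=(d, d))

def attn(x):
    return x @ W_attn

def mlp(x):
    return x @ W_mlp

x = rng.normal(size=(4, d))  # 4 tokens of dimension d

# Sequential (standard) block: the MLP sees the attention output.
y_seq = x + attn(x)
y_seq = y_seq + mlp(y_seq)

# Parallel variant: both sub-blocks read the same input x, so they can be
# evaluated concurrently; their outputs are added to the residual stream.
y_par = x + attn(x) + mlp(x)

# Because the stand-in mlp is linear here, the two variants differ exactly
# by the cross term mlp(attn(x)); with a real non-linear MLP they differ
# only approximately, which is why accuracy is largely preserved.
print(np.allclose(y_seq - y_par, mlp(attn(x))))
```

With small residual branches the cross term `mlp(attn(x))` is second-order small, which is the intuition behind why parallelizing the residual layers barely affects accuracy.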

Hugo Touvron, Matthieu Cord, Alaaeldin El-Nouby, Jakob Verbeek, Hervé Jégou • 2022

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Image Classification | ImageNet-1k (val) | Top-1 Acc: 84 | 706 |
| Image Classification | ImageNet (val) | Accuracy: 88.9 | 300 |
| Image Classification | iNaturalist 2018 (test) | -- | 192 |
| Image Classification | iNaturalist 2018 (val) | -- | 116 |
| Image Classification | Places-365 (val) | Accuracy: 60.9 | 43 |
| Domain Generalization | PACS, VLCS, OfficeHome, TerraIncognita, DomainNet | PACS Accuracy: 93.8 | 27 |
| Medical Diagnosis | COVID19-CT | F1 Score: 81.6 | 16 |
| Image Classification | iNaturalist 2021 (val) | Accuracy: 81.9 | 15 |
| Medical Diagnosis Classification | Chaoyang | F1 Score: 81.1 | 14 |
| Medical Diagnosis Classification | OCT | F1 Score (%): 95.8 | 12 |

Showing 10 of 13 rows.
