
DeiT III: Revenge of the ViT

About

A Vision Transformer (ViT) is a simple neural architecture that can serve several computer vision tasks. It has few built-in architectural priors, in contrast to more recent architectures that incorporate priors about the input data or about specific tasks. Recent works show that ViTs benefit from self-supervised pre-training, in particular BERT-like pre-training such as BEiT. In this paper, we revisit the supervised training of ViTs. Our procedure builds upon and simplifies a recipe introduced for training ResNet-50. It includes a new, simple data-augmentation procedure with only three augmentations, closer to the practice in self-supervised learning. Our evaluations on image classification (ImageNet-1k with and without pre-training on ImageNet-21k), transfer learning, and semantic segmentation show that our procedure outperforms previous fully supervised training recipes for ViT by a large margin. It also reveals that the performance of our supervised ViT is comparable to that of more recent architectures. Our results could serve as better baselines for recent self-supervised approaches demonstrated on ViT.
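The abstract's "3 augmentations" refers to a policy in which, for each image, exactly one of three simple transformations (grayscale, solarization, Gaussian blur) is chosen uniformly at random, rather than composing many operations as in RandAugment. Below is a minimal stdlib sketch of that selection logic; the three operations are stubbed out, and a real pipeline would substitute actual image transforms (e.g. torchvision's RandomGrayscale, RandomSolarize, GaussianBlur). Function names here are illustrative, not from the paper's code.

```python
import random

def grayscale(img):
    # Stub: a real implementation would convert the image to grayscale.
    return ("grayscale", img)

def solarize(img):
    # Stub: a real implementation would invert pixels above a threshold.
    return ("solarize", img)

def gaussian_blur(img):
    # Stub: a real implementation would apply a Gaussian blur kernel.
    return ("blur", img)

def three_augment(img, rng=random):
    """Apply exactly one of the three candidate augmentations,
    chosen uniformly at random for each image."""
    op = rng.choice([grayscale, solarize, gaussian_blur])
    return op(img)
```

In practice this per-image choice would be wrapped in a transform pipeline alongside the usual random crop and horizontal flip; the sketch isolates only the one-of-three selection that distinguishes the policy.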

Hugo Touvron, Matthieu Cord, Hervé Jégou • 2022

Related benchmarks

Task                   | Dataset                 | Result                   | Rank
-----------------------|-------------------------|--------------------------|-----
Image Classification   | CIFAR-100 (test)        | -                        | 3518
Semantic segmentation  | ADE20K (val)            | mIoU 54.6                | 2731
Object Detection       | COCO 2017 (val)         | -                        | 2454
Semantic segmentation  | PASCAL VOC 2012 (val)   | Mean IoU 65.8            | 2040
Image Classification   | ImageNet-1K 1.0 (val)   | Top-1 Accuracy 85.8      | 1866
Image Classification   | ImageNet-1k (val)       | Top-1 Accuracy 83.8      | 1453
Semantic segmentation  | PASCAL VOC 2012 (test)  | mIoU 76.1                | 1342
Classification         | ImageNet-1K 1.0 (val)   | Top-1 Accuracy (%) 87.7  | 1155
Instance Segmentation  | COCO 2017 (val)         | -                        | 1144
Semantic segmentation  | ADE20K                  | mIoU 25.4                | 936
Showing 10 of 58 rows
