
Emerging Properties in Self-Supervised Vision Transformers

About

In this paper, we question if self-supervised learning provides new properties to Vision Transformer (ViT) that stand out compared to convolutional networks (convnets). Beyond the fact that adapting self-supervised methods to this architecture works particularly well, we make the following observations: first, self-supervised ViT features contain explicit information about the semantic segmentation of an image, which does not emerge as clearly with supervised ViTs, nor with convnets. Second, these features are also excellent k-NN classifiers, reaching 78.3% top-1 on ImageNet with a small ViT. Our study also underlines the importance of momentum encoder, multi-crop training, and the use of small patches with ViTs. We implement our findings into a simple self-supervised method, called DINO, which we interpret as a form of self-distillation with no labels. We show the synergy between DINO and ViTs by achieving 80.1% top-1 on ImageNet in linear evaluation with ViT-Base.
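The self-distillation objective described above can be sketched with a few lines of NumPy. This is an illustrative reconstruction, not code from the paper or its repository: the teacher's output is centered and sharpened with a low temperature, the student matches it via cross-entropy, and the teacher's weights track the student's by an exponential moving average (the momentum encoder). The temperature and momentum values are assumptions based on commonly cited DINO settings.

```python
import numpy as np

def softmax(x, temp):
    # Temperature-scaled softmax along the last axis, numerically stabilized.
    z = x / temp
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def dino_loss(student_logits, teacher_logits, center,
              student_temp=0.1, teacher_temp=0.04):
    """Cross-entropy between sharpened, centered teacher targets and the
    student's predictions. `center` is a running mean of teacher outputs
    that discourages collapse to a single dimension. Temperatures here
    are assumed values, not taken from this page."""
    t = softmax(teacher_logits - center, teacher_temp)   # center + sharpen
    s = np.log(softmax(student_logits, student_temp) + 1e-9)
    return -(t * s).sum(axis=-1).mean()

def ema_update(teacher_w, student_w, momentum=0.996):
    # Momentum-encoder update: the teacher is an EMA of the student,
    # never trained by gradient descent.
    return momentum * teacher_w + (1.0 - momentum) * student_w
```

A student whose logits agree with the teacher's incurs a lower loss than one whose logits disagree, which is the signal that drives the self-distillation.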

Mathilde Caron, Hugo Touvron, Ishan Misra, Hervé Jégou, Julien Mairal, Piotr Bojanowski, Armand Joulin • 2021

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Image Classification | CIFAR-100 (test) | -- | 3518 |
| Semantic Segmentation | ADE20K (val) | mIoU 47.2 | 2731 |
| Object Detection | COCO 2017 (val) | AP 1.8 | 2454 |
| Semantic Segmentation | PASCAL VOC 2012 (val) | Mean IoU 55.9 | 2040 |
| Image Classification | ImageNet-1K 1.0 (val) | Top-1 Accuracy 83.6 | 1866 |
| Image Classification | ImageNet-1k (val) | Top-1 Accuracy 78.2 | 1453 |
| Semantic Segmentation | PASCAL VOC 2012 (test) | mIoU 74.1 | 1342 |
| Image Classification | ImageNet (val) | Top-1 Acc 77 | 1206 |
| Classification | ImageNet-1K 1.0 (val) | Top-1 Accuracy (%) 82.8 | 1155 |
| Instance Segmentation | COCO 2017 (val) | APm 0.434 | 1144 |

Showing 10 of 502 rows.
