
An Empirical Study of Training Self-Supervised Vision Transformers

About

This paper does not describe a novel method. Instead, it studies a straightforward, incremental, yet must-know baseline given the recent progress in computer vision: self-supervised learning for Vision Transformers (ViT). While training recipes for standard convolutional networks are by now highly mature and robust, recipes for ViT are yet to be built, especially in self-supervised scenarios, where training becomes more challenging. In this work, we go back to basics and investigate the effects of several fundamental components for training self-supervised ViT. We observe that instability is a major issue that degrades accuracy, and that it can be hidden by apparently good results. We reveal that these results are in fact partial failures, and that they can be improved when training is made more stable. We benchmark ViT results in MoCo v3 and several other self-supervised frameworks, with ablations in various aspects. We discuss the currently positive evidence as well as challenges and open questions. We hope that this work will provide useful data points and experience for future research.
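To make the MoCo v3 framework mentioned above concrete, below is a minimal NumPy sketch of its InfoNCE-style contrastive loss: queries come from the base encoder's prediction head, keys from the momentum encoder, the other images in the batch serve as negatives (MoCo v3 uses in-batch keys rather than a memory queue), and the loss is scaled by 2τ as in the paper. The function name and shapes are illustrative, not the authors' reference implementation.

```python
import numpy as np

def contrastive_loss(q, k, tau=0.2):
    """InfoNCE loss between query and key embeddings, both of shape (N, D).

    Positives are same-index (same-image) pairs; every other row of k in
    the batch acts as a negative. tau is the softmax temperature.
    """
    # Cosine similarity: L2-normalize rows, then take inner products.
    q = q / np.linalg.norm(q, axis=1, keepdims=True)
    k = k / np.linalg.norm(k, axis=1, keepdims=True)
    logits = q @ k.T / tau  # (N, N); positives sit on the diagonal

    # Numerically stable cross-entropy with diagonal targets.
    logits = logits - logits.max(axis=1, keepdims=True)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob)) * 2 * tau  # 2*tau scaling, as in MoCo v3
```

In the full method this loss is applied symmetrically to the two augmented crops of each image, and the momentum encoder producing k is an exponential moving average of the base encoder rather than a network updated by gradients.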

Xinlei Chen, Saining Xie, Kaiming He • 2021

Related benchmarks

| Task                     | Dataset                | Metric          | Result | Rank |
|--------------------------|------------------------|-----------------|--------|------|
| Image Classification     | CIFAR-100 (test)       | Accuracy        | 91.2   | 3518 |
| Image Classification     | CIFAR-10 (test)        | Accuracy        | 99.1   | 3381 |
| Semantic Segmentation    | ADE20K (val)           | mIoU            | 49.1   | 2731 |
| Object Detection         | COCO 2017 (val)        | AP              | 47.9   | 2454 |
| Semantic Segmentation    | PASCAL VOC 2012 (val)  | Mean IoU        | 37.2   | 2040 |
| Image Classification     | ImageNet-1K 1.0 (val)  | Top-1 Accuracy  | 84.1   | 1866 |
| Image Classification     | ImageNet-1k (val)      | Top-1 Accuracy  | 76.5   | 1453 |
| Semantic Segmentation    | PASCAL VOC 2012 (test) | mIoU            | 74.5   | 1342 |
| Person Re-Identification | Market1501 (test)      | Rank-1 Accuracy | 92.1   | 1264 |
| Image Classification     | ImageNet (val)         | Top-1 Accuracy  | 76.7   | 1206 |

Showing 10 of 242 rows.
