
Self-Supervised Learning with Swin Transformers

About

We are witnessing a modeling shift from CNNs to Transformers in computer vision. In this work, we present a self-supervised learning approach called MoBY, with Vision Transformers as its backbone architecture. The approach contains essentially no new inventions: it combines MoCo v2 and BYOL, tuned to achieve reasonably high accuracy on ImageNet-1K linear evaluation: 72.8% and 75.0% top-1 accuracy using DeiT-S and Swin-T, respectively, with 300-epoch training. The performance is slightly better than recent works such as MoCo v3 and DINO, which adopt DeiT as the backbone, but with much lighter tricks. More importantly, the general-purpose Swin Transformer backbone enables us to also evaluate the learnt representations on downstream tasks such as object detection and semantic segmentation, in contrast to a few recent approaches built on ViT/DeiT, which report only linear evaluation results on ImageNet-1K because ViT/DeiT have not been tamed for these dense prediction tasks. We hope our results can facilitate more comprehensive evaluation of self-supervised learning methods designed for Transformer architectures. Our code and models are available at https://github.com/SwinTransformer/Transformer-SSL and will be continually enriched.
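As the abstract notes, MoBY combines ingredients from MoCo v2 and BYOL: a momentum (EMA) target encoder and a contrastive (InfoNCE) objective. The sketch below illustrates those two ingredients in plain Python on toy feature vectors; the function names and the flat-list parameterization are hypothetical simplifications for illustration, not the authors' implementation (see the linked repository for that).

```python
import math

def momentum_update(online, target, m=0.99):
    """EMA update shared by MoCo v2 and BYOL:
    target <- m * target + (1 - m) * online (no gradient flows here)."""
    return [m * t + (1 - m) * o for o, t in zip(online, target)]

def l2_normalize(v):
    # Project a feature vector onto the unit sphere before comparison.
    n = math.sqrt(sum(x * x for x in v))
    return [x / n for x in v]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def info_nce(query, pos_key, neg_keys, tau=0.2):
    """Contrastive (InfoNCE) loss for one query against its positive key
    and a queue of negative keys, as used in MoCo v2-style training."""
    q = l2_normalize(query)
    logits = [dot(q, l2_normalize(k)) / tau for k in [pos_key] + neg_keys]
    z = sum(math.exp(l) for l in logits)
    return -math.log(math.exp(logits[0]) / z)

# Toy usage: the target encoder drifts slowly toward the online encoder,
# and the loss is small when query and positive key agree.
target = momentum_update(online=[1.0, 0.0], target=[0.0, 0.0], m=0.9)
loss = info_nce([1.0, 0.0], [1.0, 0.0], neg_keys=[[0.0, 1.0]])
```

In the actual method the query passes through the online encoder plus a predictor head (the BYOL-style asymmetry), while keys come from the momentum encoder; only the online branch receives gradients.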

Zhenda Xie, Yutong Lin, Zhuliang Yao, Zheng Zhang, Qi Dai, Yue Cao, Han Hu · 2021

Related benchmarks

| Task                     | Dataset            | Result              | Rank |
|--------------------------|--------------------|---------------------|------|
| Semantic Segmentation    | ADE20K (val)       | mIoU 45.81          | 2731 |
| Object Detection         | COCO 2017 (val)    | --                  | 2454 |
| Image Classification     | ImageNet-1k (val)  | Top-1 Accuracy 72.8 | 1453 |
| Image Classification     | ImageNet (val)     | Top-1 Accuracy 75   | 1206 |
| Instance Segmentation    | COCO 2017 (val)    | APm 0.415           | 1144 |
| Person Re-Identification | Market 1501        | mAP 84              | 999  |
| Semantic Segmentation    | ADE20K             | mIoU 44.1           | 936  |
| Person Re-Identification | MSMT17             | mAP 0.5             | 404  |
| Instance Segmentation    | COCO               | APmask 41.5         | 279  |
| Object Detection         | MS-COCO 2017 (val) | --                  | 237  |
(Showing 10 of 18 rows.)
