Masked Siamese Networks for Label-Efficient Learning

About

We propose Masked Siamese Networks (MSN), a self-supervised learning framework for learning image representations. Our approach matches the representation of an image view containing randomly masked patches to the representation of the original unmasked image. This self-supervised pre-training strategy is particularly scalable when applied to Vision Transformers since only the unmasked patches are processed by the network. As a result, MSNs improve the scalability of joint-embedding architectures, while producing representations of a high semantic level that perform competitively on low-shot image classification. For instance, on ImageNet-1K, with only 5,000 annotated images, our base MSN model achieves 72.4% top-1 accuracy, and with 1% of ImageNet-1K labels, we achieve 75.7% top-1 accuracy, setting a new state-of-the-art for self-supervised learning on this benchmark. Our code is publicly available.

Mahmoud Assran, Mathilde Caron, Ishan Misra, Piotr Bojanowski, Florian Bordes, Pascal Vincent, Armand Joulin, Michael Rabbat, Nicolas Ballas • 2022
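
As a rough illustration of the idea in the abstract, the sketch below shows, in PyTorch-style Python, how a masked anchor view might be compared to an unmasked target view through soft prototype assignments. It is not the authors' implementation: the function names, keep ratio, temperatures, and tensor shapes are assumptions made here for brevity, and details such as the exponential-moving-average target encoder and the mean-entropy regularizer are omitted.

```python
import torch
import torch.nn.functional as F

def random_patch_mask(patch_tokens, keep_ratio=0.3):
    # patch_tokens: (batch, num_patches, dim) ViT patch embeddings.
    # Keep a random subset of patches per image; only the kept patches are
    # forwarded through the anchor encoder, which is what makes masking
    # inexpensive for Vision Transformers.
    b, n, d = patch_tokens.shape
    n_keep = max(1, int(n * keep_ratio))
    idx = torch.rand(b, n, device=patch_tokens.device).argsort(dim=1)[:, :n_keep]
    idx = idx.unsqueeze(-1).expand(-1, -1, d)
    return torch.gather(patch_tokens, 1, idx)

def msn_style_loss(anchor_repr, target_repr, prototypes,
                   temp_anchor=0.1, temp_target=0.025):
    # anchor_repr: (batch, dim) embedding of the masked view.
    # target_repr: (batch, dim) embedding of the unmasked view (no gradient).
    # prototypes:  (K, dim) learnable cluster centers; shapes and temperature
    # values here are illustrative, not the paper's settings.
    anchor_logits = F.normalize(anchor_repr, dim=-1) @ F.normalize(prototypes, dim=-1).T
    target_logits = F.normalize(target_repr, dim=-1) @ F.normalize(prototypes, dim=-1).T
    targets = F.softmax(target_logits / temp_target, dim=-1).detach()
    return -(targets * F.log_softmax(anchor_logits / temp_anchor, dim=-1)).sum(dim=-1).mean()
```

Because only the kept patches pass through the anchor encoder, the cost of the masked branch shrinks roughly in proportion to the keep ratio, which is the scalability benefit the abstract describes.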

Related benchmarks

Task                        Dataset                            Metric             Result    Rank
Image Classification       ImageNet-1k (val)                  -                  -         1469
Video Object Segmentation  DAVIS 2017 (val)                   J mean             57.6      1193
Semantic Segmentation      ADE20K                             mIoU               26.66     1024
Image Classification       ImageNet-1k (val)                  Top-1 Accuracy     62.8      844
Semantic Segmentation      Cityscapes                         mIoU               25.39     658
Image Classification       Food-101                           Accuracy           68.93     542
Image Classification       Oxford-IIIT Pet                    Accuracy           75.91     219
Semantic Segmentation      Pascal VOC                         mIoU               0.6859    180
Image Classification       iNaturalist 18                     Overall Accuracy   72.1      125
Image Retrieval            Revisited Oxford (ROxf) (Medium)   mAP                36.6      124

Showing 10 of 92 rows

Other info

Code
