
Divide and Contrast: Self-supervised Learning from Uncurated Data

About

Self-supervised learning holds promise in leveraging large amounts of unlabeled data; however, much of its progress has thus far been limited to highly curated pre-training data such as ImageNet. We explore the effects of contrastive learning from larger, less-curated image datasets such as YFCC, and find there is indeed a large difference in the resulting representation quality. We hypothesize that this curation gap is due to a shift in the distribution of image classes, which is more diverse and heavy-tailed, resulting in less relevant negative samples to learn from. We test this hypothesis with a new approach, Divide and Contrast (DnC), which alternates between contrastive learning and clustering-based hard negative mining. When pretrained on less-curated datasets, DnC greatly improves the performance of self-supervised learning on downstream tasks, while remaining competitive with the current state of the art on curated datasets.
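The core idea in the abstract, restricting contrastive negatives to images that a clustering step judges similar, can be sketched in a few lines. Everything below is an illustrative assumption, not the paper's implementation: the tiny k-means routine, the per-cluster InfoNCE loss, and all names and shapes are hypothetical stand-ins for DnC's actual training procedure.

```python
import numpy as np

def kmeans(X, k, iters=10, seed=0):
    """Minimal k-means: assign each row of X to one of k cluster labels."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        labels = dists.argmin(1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(0)
    return labels

def info_nce_within_cluster(z1, z2, labels, tau=0.1):
    """InfoNCE loss where negatives for sample i come only from i's cluster
    (the "hard negative mining" step). z1, z2 are L2-normalized embeddings
    of two augmented views, shape (N, D)."""
    sim = z1 @ z2.T / tau  # (N, N) similarity logits
    losses = []
    for i in range(len(z1)):
        same = np.where(labels == labels[i])[0]       # candidates from i's cluster
        idx = np.unique(np.concatenate(([i], same)))  # ensure the positive is present
        logits = sim[i, idx]
        pos = int(np.where(idx == i)[0][0])           # positive: matched view of i
        losses.append(-(logits[pos] - np.log(np.exp(logits).sum())))
    return float(np.mean(losses))

# Toy usage: two noisy "views" of the same random points.
rng = np.random.default_rng(1)
X = rng.normal(size=(32, 8))
z1 = X / np.linalg.norm(X, axis=1, keepdims=True)
Y = X + 0.1 * rng.normal(size=X.shape)
z2 = Y / np.linalg.norm(Y, axis=1, keepdims=True)
labels = kmeans(z1, k=4)
loss = info_nce_within_cluster(z1, z2, labels)
```

Restricting the softmax denominator to same-cluster samples is one way to read "clustering-based hard negative mining": negatives drawn from the same cluster are semantically closer, so they give a stronger learning signal than random negatives from a heavy-tailed dataset.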

Yonglong Tian, Olivier J. Henaff, Aaron van den Oord • 2021

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Semantic Segmentation | ADE20K (val) | mIoU: 39.2 | 2731 |
| Image Classification | ImageNet-1k (val) | Top-1 Accuracy: 75.8 | 1453 |
| Video Object Segmentation | DAVIS 2017 (val) | J mean: 63.1 | 1130 |
| Semantic Segmentation | ADE20K | mIoU: 39.2 | 936 |
| Object Detection | COCO (val) | mAP: 43.9 | 613 |
| Action Recognition | UCF101 (test) | -- | 307 |
| Image Classification | Stanford Cars (test) | Accuracy: 75.3 | 306 |
| Instance Segmentation | COCO | APmask: 37.2 | 279 |
| Classification | CIFAR10 (test) | Accuracy: 91.7 | 266 |
| Image Classification | ImageNet (test) | Top-1 Accuracy: 70.7 | 235 |

(Showing 10 of 37 rows.)
