
What Makes for Good Views for Contrastive Learning?

About

Contrastive learning between multiple views of the data has recently achieved state-of-the-art performance in the field of self-supervised representation learning. Despite its success, the influence of different view choices has been less studied. In this paper, we use theoretical and empirical analysis to better understand the importance of view selection, and argue that we should reduce the mutual information (MI) between views while keeping task-relevant information intact. To verify this hypothesis, we devise unsupervised and semi-supervised frameworks that learn effective views by aiming to reduce their MI. We also consider data augmentation as a way to reduce MI, and show that increasing data augmentation indeed leads to decreasing MI and improves downstream classification accuracy. As a by-product, we achieve a new state-of-the-art accuracy on unsupervised pre-training for ImageNet classification ($73\%$ top-1 linear readout with a ResNet-50). In addition, transferring our models to PASCAL VOC object detection and COCO instance segmentation consistently outperforms supervised pre-training. Code: http://github.com/HobbitLong/PyContrast
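To make the objective concrete, below is a minimal NumPy sketch of an InfoNCE-style contrastive loss between two views, the kind of loss this line of work builds on. The function name `info_nce_loss` and the toy view construction are illustrative assumptions, not the authors' implementation; in practice the views come from data augmentation and the embeddings from an encoder such as a ResNet-50.

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.1):
    """InfoNCE contrastive loss between two batches of view embeddings.

    z1, z2: (batch, dim) L2-normalized embeddings of two views of the same
    images; matching rows are positive pairs, all other rows serve as
    negatives. Minimizing this loss maximizes a lower bound on the MI
    between the two views' representations.
    """
    # Scaled cosine-similarity matrix between every view-1 / view-2 pair.
    logits = z1 @ z2.T / temperature                      # (batch, batch)
    # Softmax cross-entropy with the diagonal (positive pairs) as targets.
    logits = logits - logits.max(axis=1, keepdims=True)   # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_prob))

# Toy example: view 2 is a lightly perturbed copy of view 1 (a stand-in
# for data augmentation), so matched pairs are far more similar than
# mismatched ones and the loss is small.
rng = np.random.default_rng(0)
z1 = rng.normal(size=(8, 32))
z1 /= np.linalg.norm(z1, axis=1, keepdims=True)
z2 = z1 + rng.normal(scale=0.1, size=z1.shape)
z2 /= np.linalg.norm(z2, axis=1, keepdims=True)
print(info_nce_loss(z1, z2))
```

Stronger augmentation (larger perturbations here) makes the positive pairs less similar, which in the paper's framing corresponds to reducing the MI between views.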

Yonglong Tian, Chen Sun, Ben Poole, Dilip Krishnan, Cordelia Schmid, Phillip Isola• 2020

Related benchmarks

| Task                  | Dataset               | Metric             | Result | Rank |
|-----------------------|-----------------------|--------------------|--------|------|
| Object Detection      | COCO 2017 (val)       | –                  | –      | 2643 |
| Image Classification  | ImageNet-1k (val)     | Top-1 Accuracy     | 73     | 1469 |
| Image Classification  | ImageNet (val)        | Top-1 Acc          | 75.2   | 1206 |
| Instance Segmentation | COCO 2017 (val)       | –                  | –      | 1201 |
| Classification        | ImageNet-1K 1.0 (val) | Top-1 Accuracy (%) | 70.1   | 1163 |
| Image Classification  | ImageNet 1k (test)    | Top-1 Accuracy     | 73     | 848  |
| Object Detection      | PASCAL VOC 2007 (test)| mAP                | 57.6   | 844  |
| Image Classification  | ImageNet-1k (val)     | Top-1 Accuracy     | 73     | 844  |
| Image Classification  | ImageNet-1k (val)     | Top-1 Acc          | 73     | 706  |
| Image Classification  | Stanford Cars         | Accuracy           | 49.6   | 635  |

Showing 10 of 63 rows
