
Supervised Contrastive Learning

About

Contrastive learning applied to self-supervised representation learning has seen a resurgence in recent years, leading to state-of-the-art performance in the unsupervised training of deep image models. Modern batch contrastive approaches subsume or significantly outperform traditional contrastive losses such as the triplet, max-margin, and N-pairs losses. In this work, we extend the self-supervised batch contrastive approach to the fully-supervised setting, allowing us to effectively leverage label information. Clusters of points belonging to the same class are pulled together in embedding space, while clusters of samples from different classes are simultaneously pushed apart. We analyze two possible versions of the supervised contrastive (SupCon) loss, identifying the best-performing formulation. On ResNet-200, we achieve top-1 accuracy of 81.4% on the ImageNet dataset, which is 0.8% above the best number reported for this architecture. We show consistent outperformance over cross-entropy on other datasets and two ResNet variants. The loss shows benefits for robustness to natural corruptions and is more stable to hyperparameter settings such as optimizers and data augmentations. Our loss function is simple to implement, and reference TensorFlow code is released at https://t.ly/supcon.
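The contrast described in the abstract — pulling same-class embeddings together and pushing different-class embeddings apart — can be sketched as a loss over a batch of L2-normalized embeddings. The following NumPy sketch implements one of the two SupCon formulations the paper analyzes (the one averaging log-probabilities over positives outside the log); the function name, array shapes, and default temperature are illustrative assumptions, not the released TensorFlow API.

```python
import numpy as np

def supcon_loss(features, labels, temperature=0.1):
    """Sketch of the SupCon loss for a batch of L2-normalized embeddings.

    features: (n, d) array of unit-norm embeddings.
    labels:   (n,) integer class labels.
    """
    n = features.shape[0]
    # Pairwise cosine similarities scaled by the temperature.
    logits = features @ features.T / temperature
    # Subtract the per-row max for numerical stability (cancels in the ratio).
    logits = logits - logits.max(axis=1, keepdims=True)
    # Exclude each anchor's comparison with itself from the denominator.
    self_mask = np.eye(n, dtype=bool)
    exp_logits = np.exp(logits) * ~self_mask
    log_prob = logits - np.log(exp_logits.sum(axis=1, keepdims=True))
    # Positives: samples sharing the anchor's label, excluding the anchor.
    pos_mask = (labels[:, None] == labels[None, :]) & ~self_mask
    pos_counts = pos_mask.sum(axis=1)
    # Only anchors with at least one positive contribute to the loss.
    valid = pos_counts > 0
    mean_log_prob_pos = (log_prob * pos_mask).sum(axis=1)[valid] / pos_counts[valid]
    return -mean_log_prob_pos.mean()
```

Intuitively, a batch whose classes form tight clusters yields a low loss (each anchor's positives dominate the softmax denominator), while randomly scattered embeddings yield a high one.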

Prannay Khosla, Piotr Teterwak, Chen Wang, Aaron Sarna, Yonglong Tian, Phillip Isola, Aaron Maschinot, Ce Liu, Dilip Krishnan • 2020

Related benchmarks

Task | Dataset | Metric | Result | Rank
Image Classification | CIFAR-100 (test) | Accuracy | 76.73 | 3518
Image Classification | CIFAR-10 (test) | Accuracy | 96.07 | 3381
Image Classification | ImageNet | Top-1 Accuracy | 77.13 | 429
Image Classification | SUN397 | -- | -- | 425
Image Classification | DTD | Accuracy | 74.6 | 419
Person Re-Identification | MSMT17 | mAP | 0.665 | 404
Image Classification | MNIST | Accuracy | 99.38 | 395
Image Classification | STL-10 (test) | Accuracy | 43.41 | 357
Image Classification | CIFAR-100 | -- | -- | 302
Image Classification | iNaturalist 2018 | Top-1 Accuracy | 62.75 | 287
Showing 10 of 199 rows.
