
Parametric Contrastive Learning

About

In this paper, we propose Parametric Contrastive Learning (PaCo) to tackle long-tailed recognition. Through theoretical analysis, we observe that the supervised contrastive loss tends to be biased toward high-frequency classes, which increases the difficulty of imbalanced learning. To rebalance training from an optimization perspective, we introduce a set of parametric, class-wise learnable centers. We further analyze the PaCo loss under a balanced setting. Our analysis shows that PaCo adaptively strengthens the push of same-class samples toward each other as more samples are pulled together with their corresponding centers, which benefits hard-example learning. Experiments on long-tailed CIFAR, ImageNet, Places, and iNaturalist 2018 set a new state of the art for long-tailed recognition. On full ImageNet, models trained with the PaCo loss surpass supervised contrastive learning across various ResNet backbones, e.g., our ResNet-200 achieves 81.8% top-1 accuracy. Our code is available at https://github.com/dvlab-research/Parametric-Contrastive-Learning.
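The idea described in the abstract can be sketched in code: supervised contrastive learning where each sample's contrast set is augmented with one learnable center per class, and sample-sample positives are down-weighted relative to sample-center positives to rebalance optimization. The sketch below is an illustrative NumPy reimplementation under those assumptions, not the authors' official code (see the linked repository); the names `paco_loss`, `alpha`, and `temp`, and the exact weighting scheme, are simplifications introduced here.

```python
import numpy as np

def paco_loss(feats, labels, centers, temp=0.07, alpha=0.05):
    """Sketch of a PaCo-style parametric contrastive loss.

    feats:   (N, D) L2-normalized sample features
    labels:  (N,)   integer class labels
    centers: (C, D) learnable class centers, one per class
    alpha down-weights sample-sample positives relative to
    sample-center positives, rebalancing the loss toward centers.
    """
    N = feats.shape[0]
    C = centers.shape[0]
    # Contrast set = batch samples + all class centers.
    keys = np.concatenate([feats, centers], axis=0)          # (N + C, D)
    key_labels = np.concatenate([labels, np.arange(C)])      # (N + C,)
    logits = feats @ keys.T / temp                           # (N, N + C)
    # Exclude each sample's similarity with itself.
    self_mask = np.zeros((N, N + C), dtype=bool)
    self_mask[np.arange(N), np.arange(N)] = True
    logits = np.where(self_mask, -np.inf, logits)
    # Positives: entries in the contrast set with the same label.
    pos = (key_labels[None, :] == labels[:, None]) & ~self_mask
    # Weight alpha for sample positives, 1.0 for center positives.
    weights = np.where(np.arange(N + C)[None, :] < N, alpha, 1.0)
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    w = weights * pos
    per_sample = -(w * np.where(pos, log_prob, 0.0)).sum(1) / w.sum(1)
    return per_sample.mean()
```

Because every sample always has its own class center as a positive, the weighted positive set is never empty even when a sample is the only member of its class in the batch, which is exactly the situation that arises for tail classes.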

Jiequan Cui, Zhisheng Zhong, Shu Liu, Bei Yu, Jiaya Jia • 2021

Related benchmarks

Task | Dataset | Metric | Result | Rank
--- | --- | --- | --- | ---
Image Classification | iNaturalist 2018 | Top-1 Accuracy | 73.2 | 287
Image Classification | CUB-200-2011 | Accuracy | 89.2 | 257
Image Classification | ImageNet-LT | Top-1 Accuracy | 58.2 | 251
Long-Tailed Image Classification | ImageNet-LT (test) | Top-1 Acc (Overall) | 60 | 220
Image Classification | iNaturalist 2018 (test) | Top-1 Accuracy | 73.2 | 192
Image Classification | ImageNet-LT (test) | Top-1 Acc (All) | 60 | 159
Image Classification | Stanford Dogs | Accuracy | 92.7 | 130
Image Classification | Places-LT (test) | Accuracy (Medium) | 47.9 | 128
Long-Tailed Visual Recognition | ImageNet-LT | Overall Accuracy | 60 | 89
Image Classification | CIFAR-100-LT (Imbalance Ratio 100) | Top-1 Acc | 0.52 | 88

Showing 10 of 70 rows.
