
Combining Self-Supervised and Supervised Learning with Noisy Labels

About

Since convolutional neural networks (CNNs) easily overfit the noisy labels that are ubiquitous in visual classification tasks, training CNNs robustly against such labels has been a great challenge. Various methods have been proposed, but none of them pay attention to the difference between the representation learning and the classifier learning of a CNN. Inspired by the observation that the classifier is fairly robust to noisy labels while the representation is much more fragile, and by recent advances in self-supervised representation learning (SSRL), we design a new method, CS$^3$NL, which obtains the representation via SSRL without using labels and then trains the classifier directly on the noisy labels. Extensive experiments are performed on both synthetic and real benchmark datasets. The results demonstrate that the proposed method beats the state-of-the-art ones by a large margin, especially at high noise levels.
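The synthetic benchmarks below (e.g., "CIFAR-10 Symmetric Noise") rely on injecting label noise into a clean dataset. As a minimal sketch of how symmetric label noise is commonly synthesized — the function name and the convention of flipping only to a *different* class are assumptions for illustration, not the paper's actual code:

```python
import random

def add_symmetric_noise(labels, num_classes, noise_rate, seed=0):
    """Flip each label, with probability `noise_rate`, to a class chosen
    uniformly among the *other* classes (symmetric label noise).

    Note: some papers instead flip to any class including the original,
    making the effective corruption rate noise_rate * (C - 1) / C.
    """
    rng = random.Random(seed)
    noisy = []
    for y in labels:
        if rng.random() < noise_rate:
            # pick a wrong class uniformly among the remaining classes
            y = rng.choice([c for c in range(num_classes) if c != y])
        noisy.append(y)
    return noisy

# Example: 20% symmetric noise on a balanced 10-class label set
clean = [i % 10 for i in range(1000)]
noisy = add_symmetric_noise(clean, num_classes=10, noise_rate=0.2)
frac_flipped = sum(c != n for c, n in zip(clean, noisy)) / len(clean)
```

Here `frac_flipped` comes out close to the requested 0.2, since every selected label is guaranteed to change class.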

Yongqi Zhang, Hui Zhang, Quanming Yao, Jun Wan • 2020

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Image Classification | CIFAR-100 (test) | Accuracy: 76.9 | 3518 |
| Image Classification | CIFAR-10 (test) | Accuracy: 95.9 | 3381 |
| Image Classification | Clothing1M (test) | -- | 546 |
| Image Classification | CIFAR-10 Symmetric Noise (test) | -- | 64 |
| Image Classification | CIFAR-10 | Accuracy (Noise 20%): 95.8 | 39 |
| Image Classification | CIFAR-100 | Accuracy (20% Symmetric Noise): 76.7 | 33 |
| Image Classification | CIFAR-10 Asymmetric Noise (test) | Accuracy (40% Noise): 92.4 | 33 |
| Image Classification | CIFAR-10 40% asymmetric noise | Accuracy: 92.3 | 27 |
| Image Classification | CIFAR-100 symmetric label noise (test) | Accuracy (20% Noise): 76.9 | 24 |
| Image Classification | CIFAR-10 20% asymmetric noise | Accuracy: 95 | 13 |

(10 of 12 rows shown)
