
Combating noisy labels by agreement: A joint training method with co-regularization

About

Deep learning with noisy labels is a practically challenging problem in weakly supervised learning. The state-of-the-art approaches "Decoupling" and "Co-teaching+" claim that the "disagreement" strategy is crucial for alleviating the problem of learning with noisy labels. In this paper, we start from a different perspective and propose a robust learning paradigm called JoCoR, which aims to reduce the diversity of two networks during training. Specifically, we first use two networks to make predictions on the same mini-batch of data and calculate a joint loss with Co-Regularization for each training example. Then we select small-loss examples to update the parameters of both networks simultaneously. Trained by the joint loss, the two networks become more and more similar due to the effect of Co-Regularization. Extensive experimental results on corrupted data from benchmark datasets including MNIST, CIFAR-10, CIFAR-100 and Clothing1M demonstrate that JoCoR is superior to many state-of-the-art approaches for learning with noisy labels.
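The abstract's two steps (a per-example joint loss combining both networks' classification losses with a co-regularization term, followed by small-loss selection) can be sketched in plain Python. This is a minimal illustration, not the authors' implementation: the function names, the symmetric-KL form of the co-regularization, and the `lam` weighting are assumptions based on the description above.

```python
import math

def jocor_joint_loss(p1, p2, label, lam=0.85):
    """Sketch of a per-example JoCoR-style joint loss: a weighted sum of
    the two networks' cross-entropy losses plus a symmetric-KL
    co-regularization term that pulls their predictions together.
    p1 and p2 are the two networks' softmax outputs for one example;
    lam (an assumed hyperparameter) trades off the two parts."""
    eps = 1e-12  # guard against log(0)
    ce = -math.log(p1[label] + eps) - math.log(p2[label] + eps)
    kl12 = sum(a * math.log((a + eps) / (b + eps)) for a, b in zip(p1, p2))
    kl21 = sum(b * math.log((b + eps) / (a + eps)) for a, b in zip(p1, p2))
    return (1.0 - lam) * ce + lam * (kl12 + kl21)

def select_small_loss(losses, keep_rate):
    """Small-loss selection: keep the keep_rate fraction of examples in
    the mini-batch with the smallest joint loss; both networks would
    then be updated on exactly this subset."""
    n_keep = int(keep_rate * len(losses))
    ranked = sorted(range(len(losses)), key=lambda i: losses[i])
    return ranked[:n_keep]
```

An example where the two networks agree on the (possibly clean) label yields a small joint loss, while disagreement or a wrong label inflates both the cross-entropy and the KL terms, so the example is filtered out by the selection step.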

Hongxin Wei, Lei Feng, Xiangyu Chen, Bo An• 2020

Related benchmarks

Task                              | Dataset              | Result              | Rank
Image Classification              | CIFAR-100 (test)     | Accuracy 68.48      | 3518
Node Classification               | Cora                 | Accuracy 76.3       | 1215
Node Classification               | Citeseer             | Accuracy 65.9       | 931
Node Classification               | Cora (test)          | Mean Accuracy 71.16 | 861
Node Classification               | Pubmed               | Accuracy 61.8       | 819
Image Classification              | Fashion MNIST (test) | Accuracy 92.01      | 592
Image Classification              | Clothing1M (test)    | Accuracy 70.3       | 574
Fine-grained Image Classification | CUB200 2011 (test)   | Accuracy 62.99      | 543
Image Classification              | SVHN (test)          | Accuracy 93.52      | 401
Fine-grained Image Classification | Stanford Cars (test) | Accuracy 74.68      | 348
Showing 10 of 110 rows
