
How does Disagreement Help Generalization against Label Corruption?

About

Learning with noisy labels is one of the central problems in weakly-supervised learning. Exploiting the memorization effect of deep neural networks, training on small-loss instances has become a promising way to handle noisy labels. This idea underlies the state-of-the-art approach "Co-teaching", which cross-trains two deep neural networks using the small-loss trick. However, as the number of epochs increases, the two networks converge to a consensus and Co-teaching reduces to the self-training MentorNet. To tackle this issue, we propose a robust learning paradigm called Co-teaching+, which bridges the "Update by Disagreement" strategy with the original Co-teaching. First, the two networks feed forward and predict all data, but keep only the data on which their predictions disagree. Then, among such disagreement data, each network selects its small-loss data, but back-propagates the small-loss data selected by its peer network to update its own parameters. Empirical results on benchmark datasets demonstrate that Co-teaching+ is far superior to many state-of-the-art methods in the robustness of trained models.
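The two-step selection described above (keep disagreement data, then let each network hand its small-loss picks to its peer) can be sketched as a per-batch selection routine. This is a minimal illustration, not the authors' implementation: the function name, the fixed `forget_rate` argument, and the fallback when a batch has no disagreement are all assumptions for the sake of a self-contained example.

```python
import numpy as np

def coteaching_plus_select(preds1, preds2, loss1, loss2, forget_rate):
    """One Co-teaching+ selection step (illustrative sketch).

    preds1, preds2 : predicted class labels from the two networks (1-D arrays).
    loss1, loss2   : per-sample losses of the two networks (1-D arrays).
    forget_rate    : fraction of (presumably noisy) small-loss candidates to drop.

    Returns (idx_for_net1, idx_for_net2): the sample indices each network
    should back-propagate, chosen by its *peer* network.
    """
    # Step 1: "Update by Disagreement" -- keep only samples where the
    # two networks' predictions differ.
    disagree = np.flatnonzero(preds1 != preds2)
    if disagree.size == 0:
        # Assumed fallback: if the batch has no disagreement, consider all samples.
        disagree = np.arange(len(preds1))

    # Step 2: each network ranks the disagreement data by its own loss and
    # keeps the (1 - forget_rate) fraction with the smallest loss.
    n_keep = max(1, int((1.0 - forget_rate) * disagree.size))
    small1 = disagree[np.argsort(loss1[disagree])[:n_keep]]
    small2 = disagree[np.argsort(loss2[disagree])[:n_keep]]

    # Step 3: cross update -- each network trains on the small-loss
    # samples selected by its peer, which keeps the two networks diverged.
    return small2, small1
```

In a full training loop, each returned index set would be fed to the corresponding network's optimizer step; the cross exchange in step 3 is what distinguishes Co-teaching(+) from self-training on one's own small-loss picks.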

Xingrui Yu, Bo Han, Jiangchao Yao, Gang Niu, Ivor W. Tsang, Masashi Sugiyama · 2019

Related benchmarks

Task                  Dataset                            Result           Rank
Image Classification  CIFAR-100 (test)                   Accuracy 68.67   3518
Image Classification  CIFAR-10 (test)                    Accuracy 89.8    3381
Image Classification  CIFAR-10 (test)                    Accuracy 78.72   906
Image Classification  CIFAR-100                          --               622
Image Classification  Clothing1M (test)                  Accuracy 59.32   546
Image Classification  CIFAR-10                           Accuracy 89.5    471
Image Classification  SVHN (test)                        Accuracy 92.64   362
Image Retrieval       CUB                                --               87
Image Classification  CIFAR-10N (Worst)                  Accuracy 83.83   78
Image Classification  CIFAR-100 Symmetric Noise (test)   Accuracy 65.6    76

Showing 10 of 96 rows
