Joint Optimization Framework for Learning with Noisy Labels
About
Deep neural networks (DNNs) trained on large-scale datasets have achieved impressive performance in image classification. Many large-scale datasets are collected from websites; however, they tend to contain inaccurate annotations, termed noisy labels. Training on such noisily labeled datasets degrades performance because DNNs easily overfit to noisy labels. To overcome this problem, we propose a joint optimization framework that learns DNN parameters and estimates true labels simultaneously. Our framework corrects labels during training by alternately updating the network parameters and the labels. We conduct experiments on noisy CIFAR-10 datasets and the Clothing1M dataset. The results indicate that our approach significantly outperforms other state-of-the-art methods.
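The alternating scheme above can be illustrated with a toy sketch: a linear softmax classifier is trained on data with partially flipped labels, and soft label estimates are nudged toward the network's predictions each epoch. This is a minimal illustration, not the paper's exact method (the update rate `alpha` and the omission of the paper's regularization terms are simplifying assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 2-class data: two well-separated Gaussian blobs, with 20% of
# the labels flipped to simulate label noise.
n = 200
X = np.vstack([rng.normal(-2, 1, (n // 2, 2)), rng.normal(2, 1, (n // 2, 2))])
y_true = np.array([0] * (n // 2) + [1] * (n // 2))
y_noisy = y_true.copy()
flip = rng.random(n) < 0.2
y_noisy[flip] = 1 - y_noisy[flip]

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Soft label estimates, initialized from the observed (noisy) hard labels.
Y = np.eye(2)[y_noisy].astype(float)

W = np.zeros((2, 2))  # linear "network" parameters
b = np.zeros(2)
lr, alpha = 0.1, 0.1  # alpha: label-update rate (illustrative choice)

for epoch in range(100):
    # Step 1: update network parameters by gradient descent on the
    # cross-entropy against the current soft label estimates.
    p = softmax(X @ W + b)
    grad = p - Y
    W -= lr * (X.T @ grad) / n
    b -= lr * grad.mean(axis=0)

    # Step 2: update the label estimates toward the network predictions
    # (soft-label variant; the paper adds regularizers to avoid collapse).
    Y = (1 - alpha) * Y + alpha * softmax(X @ W + b)

# Fraction of corrected labels that match the clean ground truth.
corrected = Y.argmax(axis=1)
recovered = (corrected == y_true).mean()
```

On this easily separable toy problem the corrected labels recover most of the flipped annotations, which is the intuition behind label correction during training: the network's predictions carry the majority signal and gradually overrule inconsistent labels.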
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Image Classification | CIFAR-100 (test) | Accuracy | 72.94 | 3518 |
| Image Classification | CIFAR-10 (test) | Accuracy | 88.9 | 3381 |
| Image Classification | CIFAR-10 (test) | Accuracy | 93.5 | 906 |
| Image Classification | Clothing1M (test) | Accuracy | 72.23 | 546 |
| Image Classification | ImageNet (val) | Top-1 Accuracy | 59.5 | 354 |
| Whole Slide Image Classification | CAMELYON16 (test) | AUC | 0.9894 | 127 |
| Image Classification | Food-101 (test) | Top-1 Accuracy | 81.5 | 89 |
| Image Classification | CIFAR-10 standard (test) | Accuracy | 88.37 | 68 |
| Image Classification | CIFAR-10 Symmetric Noise (test) | Test Accuracy (Overall) | 93.6 | 64 |
| Image Classification | CIFAR-100 non-IID (test) | Test Accuracy (Avg Best) | 59.84 | 62 |