Making Deep Neural Networks Robust to Label Noise: a Loss Correction Approach
About
We present a theoretically grounded approach to train deep neural networks, including recurrent networks, subject to class-dependent label noise. We propose two procedures for loss correction that are agnostic to both application domain and network architecture. They simply amount to at most a matrix inversion and multiplication, provided that we know the probability of each class being corrupted into another. We further show how one can estimate these probabilities, adapting a recent technique for noise estimation to the multi-class setting, and thus providing an end-to-end framework. Extensive experiments on MNIST, IMDB, CIFAR-10, CIFAR-100 and a large-scale dataset of clothing images employing a diversity of architectures --- stacking dense, convolutional, pooling, dropout, batch normalization, word embedding, LSTM and residual layers --- demonstrate the noise robustness of our proposals. Incidentally, we also prove that, when ReLU is the only non-linearity, the loss curvature is immune to class-dependent label noise.
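The two loss corrections can be sketched with plain NumPy. In the snippet below, `T` is the noise transition matrix with `T[i, j]` the probability that a true label `i` is observed as `j`; the forward correction passes the model's softmax output through `T` before applying cross-entropy, while the backward correction left-multiplies the per-class loss vector by `T`'s inverse. This is a minimal illustration of the idea, not the paper's reference implementation (which operates inside the training framework's loss layer):

```python
import numpy as np

def forward_corrected_loss(probs, labels, T):
    """Forward correction: map clean-class probabilities to noisy-class
    probabilities via T, then take cross-entropy against the noisy labels."""
    noisy_probs = probs @ T                     # shape (batch, num_classes)
    return -np.log(noisy_probs[np.arange(len(labels)), labels])

def backward_corrected_loss(probs, labels, T):
    """Backward correction: compute the loss for every possible label and
    apply T^{-1}; in expectation over the noise, this equals the clean loss."""
    T_inv = np.linalg.inv(T)
    losses = -np.log(probs)                     # (batch, num_classes) loss vector
    corrected = losses @ T_inv.T                # row n, col y = sum_j T_inv[y, j] * losses[n, j]
    return corrected[np.arange(len(labels)), labels]
```

With `T` equal to the identity (no noise), both corrections reduce to the ordinary cross-entropy loss, which is a quick sanity check for an implementation.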
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Image Classification | CIFAR-100 (test) | Accuracy | 73.01 | 3518 |
| Image Classification | CIFAR-10 (test) | Accuracy | 91.06 | 3381 |
| Image Classification | CIFAR-10 (test) | Accuracy | 93.12 | 906 |
| Node Classification | Cora | Accuracy | 75.9 | 885 |
| Image Classification | MNIST (test) | Accuracy | 99.3 | 882 |
| Node Classification | Citeseer | Accuracy | 62.4 | 804 |
| Node Classification | Pubmed | Accuracy | 71 | 742 |
| Node Classification | Cora (test) | Mean Accuracy | 72.47 | 687 |
| Image Classification | CIFAR-100 (val) | Accuracy | 75.4 | 661 |
| Image Classification | CIFAR-100 | -- | -- | 622 |