
Making Deep Neural Networks Robust to Label Noise: A Loss Correction Approach

About

We present a theoretically grounded approach to train deep neural networks, including recurrent networks, subject to class-dependent label noise. We propose two procedures for loss correction that are agnostic to both application domain and network architecture. They simply amount to at most a matrix inversion and multiplication, provided that we know the probability of each class being corrupted into another. We further show how one can estimate these probabilities, adapting a recent technique for noise estimation to the multi-class setting, and thus providing an end-to-end framework. Extensive experiments on MNIST, IMDB, CIFAR-10, CIFAR-100 and a large-scale dataset of clothing images employing a diversity of architectures --- stacking dense, convolutional, pooling, dropout, batch normalization, word embedding, LSTM and residual layers --- demonstrate the noise robustness of our proposals. Incidentally, we also prove that, when ReLU is the only non-linearity, the loss curvature is immune to class-dependent label noise.
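The two corrections the abstract mentions can be sketched in a few lines of NumPy. Given a noise transition matrix `T` (where `T[i, j]` is the probability that a clean label `i` is observed as `j`), the "forward" correction mixes the model's predictions through `T` before computing cross-entropy, while the "backward" correction multiplies the per-class loss vector by `T`⁻¹. Function names and the `1e-12` clipping constant below are illustrative choices, not taken from the paper's code:

```python
import numpy as np

def forward_corrected_loss(probs, noisy_labels, T):
    """Forward correction: push clean-class predictions through the
    noise transition matrix T, then take cross-entropy against the
    observed noisy labels."""
    noisy_probs = probs @ T                              # (n, c)
    picked = noisy_probs[np.arange(len(noisy_labels)), noisy_labels]
    return -np.log(picked + 1e-12)                       # (n,)

def backward_corrected_loss(probs, noisy_labels, T):
    """Backward correction: compute the loss for every possible class,
    then un-mix with T^{-1} so that, in expectation over the noise,
    the corrected loss matches the loss on clean labels."""
    per_class_loss = -np.log(probs + 1e-12)              # (n, c)
    corrected = per_class_loss @ np.linalg.inv(T).T      # (n, c)
    return corrected[np.arange(len(noisy_labels)), noisy_labels]
```

With `T` equal to the identity (no noise), both functions reduce to ordinary cross-entropy; the matrix inversion in the backward case is the "at most a matrix inversion and multiplication" cost referred to above.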

Giorgio Patrini, Alessandro Rozza, Aditya Menon, Richard Nock, Lizhen Qu • 2016

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Image Classification | CIFAR-100 (test) | Accuracy 73.01 | 3518 |
| Image Classification | CIFAR-10 (test) | Accuracy 91.06 | 3381 |
| Image Classification | CIFAR-10 (test) | Accuracy 93.12 | 906 |
| Node Classification | Cora | Accuracy 75.9 | 885 |
| Image Classification | MNIST (test) | Accuracy 99.3 | 882 |
| Node Classification | Citeseer | Accuracy 62.4 | 804 |
| Node Classification | Pubmed | Accuracy 71 | 742 |
| Node Classification | Cora (test) | Mean Accuracy 72.47 | 687 |
| Image Classification | CIFAR-100 (val) | Accuracy 75.4 | 661 |
| Image Classification | CIFAR-100 | -- | 622 |

Showing 10 of 180 rows
