Peer Loss Functions: Learning from Noisy Labels without Knowing Noise Rates
About
Learning with noisy labels is a common challenge in supervised learning. Existing approaches often require practitioners to specify noise rates, i.e., a set of parameters controlling the severity of label noise in the problem, and these specifications are either assumed to be given or estimated in an additional step. In this work, we introduce a new family of loss functions, which we name peer loss functions, that enables learning from noisy labels without a priori specification of the noise rates. Peer loss functions work within the standard empirical risk minimization (ERM) framework. We show that, under mild conditions, performing ERM with peer loss functions on the noisy dataset leads to the optimal or a near-optimal classifier, as if ERM had been performed over the clean training data, which we do not have access to. We pair our results with an extensive set of experiments. Peer loss simplifies model development when training labels are potentially noisy, and can serve as a robust candidate loss function in such situations.
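To make the idea concrete, here is a minimal sketch of how a peer loss term can be computed. For each training sample, the standard loss on the sample's own (prediction, label) pair is penalized by the loss on a randomly paired "peer" sample, whose prediction and label are drawn independently of each other. This is an illustrative NumPy sketch, not the paper's reference implementation: cross-entropy is used here as the base loss for illustration, and the function and variable names are our own.

```python
import numpy as np

def cross_entropy(probs, labels):
    # Per-sample cross-entropy given predicted class probabilities.
    return -np.log(probs[np.arange(len(labels)), labels] + 1e-12)

def peer_loss(probs, labels, rng):
    """Sketch of a peer loss: base loss on each (x_i, y_i) minus the
    base loss on a randomly paired peer, where the peer's prediction
    index j and label index k are drawn independently of each other.
    """
    n = len(labels)
    j = rng.integers(0, n, size=n)  # random indices for peer predictions
    k = rng.integers(0, n, size=n)  # random indices for peer labels
    return cross_entropy(probs, labels) - cross_entropy(probs[j], labels[k])
```

Because the peer term evaluates the loss on a mismatched (prediction, label) pair, a classifier that blindly fits the noisy labels is penalized, which is what removes the need to know the noise rates.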
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Image Classification | CIFAR-100 (test) | Accuracy | 65.64 | 3518 |
| Image Classification | Clothing1M (test) | Accuracy | 72.6 | 546 |
| Image Classification | SVHN (test) | Accuracy | 92.59 | 362 |
| Image Classification | CIFAR-100 | Accuracy | 70.43 | 302 |
| Image Classification | MNIST | Accuracy | 99.25 | 263 |
| Image Classification | Fashion MNIST | Accuracy | 89.78 | 225 |
| Image Classification | CIFAR-10N (Worst) | Accuracy | 82.53 | 78 |
| Image Classification | CIFAR-100 Symmetric Noise (test) | Accuracy | 62.16 | 76 |
| Image Classification | CIFAR-10N (Aggregate) | Accuracy | 90.75 | 74 |
| Image Classification | CIFAR-10 Symmetric Noise (test) | Test Accuracy (Overall) | 90.21 | 64 |