
Peer Loss Functions: Learning from Noisy Labels without Knowing Noise Rates

About

Learning with noisy labels is a common challenge in supervised learning. Existing approaches often require practitioners to specify noise rates, i.e., parameters controlling the severity of the label noise, which are either assumed to be given or estimated in additional steps. In this work, we introduce a new family of loss functions, which we call peer loss functions, that enables learning from noisy labels without requiring a priori specification of the noise rates. Peer loss functions work within the standard empirical risk minimization (ERM) framework. We show that, under mild conditions, performing ERM with peer loss functions on the noisy dataset yields an optimal or near-optimal classifier, as if ERM had been performed over the clean training data, to which we do not have access. We pair our results with an extensive set of experiments. Peer loss simplifies model development when training labels are potentially noisy, and can serve as a robust candidate loss function in such situations.
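As a rough illustration of the idea (not the authors' exact implementation): for each training sample, the peer loss subtracts from the base loss on that sample's prediction/label pair the base loss evaluated on a randomly drawn "peer" prediction and an independently drawn peer label. The sketch below assumes a cross-entropy base loss; all function names are ours.

```python
import numpy as np

def cross_entropy(logits, labels):
    """Mean per-sample cross-entropy over a batch (numerically stable)."""
    z = logits - logits.max(axis=1, keepdims=True)
    logp = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -logp[np.arange(len(labels)), labels].mean()

def peer_loss(base_loss, logits, labels, rng=None):
    """Sketch of a peer loss: base loss on the true (prediction, label)
    pairs minus base loss on independently resampled peer pairs.
    The subtracted term penalizes a classifier that merely fits the
    marginal label distribution, which is what noisy labels encourage."""
    rng = rng if rng is not None else np.random.default_rng(0)
    n = len(labels)
    # draw peer indices for predictions and labels *independently*
    i1 = rng.integers(0, n, size=n)
    i2 = rng.integers(0, n, size=n)
    return base_loss(logits, labels) - base_loss(logits[i1], labels[i2])
```

In this form the penalty term pairs predictions and labels from different samples, so a constant classifier gains nothing from the subtraction, while a classifier that genuinely fits the data keeps the first term low; no noise-rate estimate appears anywhere.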

Yang Liu, Hongyi Guo • 2019

Related benchmarks

Task                 | Dataset                          | Metric                  | Result | Rank
---------------------|----------------------------------|-------------------------|--------|-----
Image Classification | CIFAR-100 (test)                 | Accuracy                | 65.64  | 3518
Image Classification | Clothing1M (test)                | Accuracy                | 72.6   | 574
Image Classification | SVHN (test)                      | Accuracy                | 92.59  | 401
Image Classification | MNIST                            | Accuracy                | 99.25  | 398
Image Classification | CIFAR-100                        | Accuracy                | 70.43  | 302
Image Classification | Fashion MNIST                    | Accuracy                | 89.78  | 240
Image Classification | CIFAR-10N (Worst)                | Accuracy                | 82.53  | 83
Image Classification | CIFAR-10N (Aggregate)            | Accuracy                | 90.75  | 78
Image Classification | CIFAR-100 Symmetric Noise (test) | Accuracy                | 62.16  | 76
Image Classification | CIFAR-10 Symmetric Noise (test)  | Test Accuracy (Overall) | 90.21  | 64
Showing 10 of 59 rows
