
Learning with Confident Examples: Rank Pruning for Robust Classification with Noisy Labels

About

Noisy PN learning is the problem of binary classification when training examples may be mislabeled (flipped) uniformly with noise rate rho1 for positive examples and rho0 for negative examples. We propose Rank Pruning (RP) to solve noisy PN learning and the open problem of estimating the noise rates, i.e. the fraction of wrong positive and negative labels. Unlike prior solutions, RP is time-efficient and general, requiring only O(T) for any unrestricted choice of probabilistic classifier with T fitting time. We prove that RP has consistent noise estimation and equivalent expected risk as learning with uncorrupted labels in ideal conditions, and derive closed-form solutions when conditions are non-ideal. RP achieves state-of-the-art noise estimation, F1, error, and AUC-PR on both the MNIST and CIFAR datasets, regardless of the amount of noise, and performs comparably well when a large portion of training examples are noise drawn from a third distribution. To highlight, RP with a CNN classifier can predict whether an MNIST digit is a "one" or "not" with only 0.25% error, and 0.46% error across all digits, even when 50% of positive examples are mislabeled and 50% of observed positive labels are mislabeled negative examples.
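The core idea can be sketched in a few steps: fit a probabilistic classifier on the noisy labels, use the mean predicted probability within each observed class as a confidence threshold, estimate the noise rates from label disagreement among confident examples, prune the least-confident fraction of each observed class, and refit. The sketch below is a simplified illustration under these assumptions, not the paper's exact estimators; it uses a small hand-rolled logistic regression (the hypothetical helpers `fit_logreg`, `predict_proba`, and `rank_pruning` are ours), and the reweighting of survivors is a simplification of the paper's loss weighting.

```python
import numpy as np

def fit_logreg(X, y, w=None, lr=0.1, iters=300):
    # Plain logistic regression via gradient descent, with optional
    # per-example weights w; returns parameters (weights + bias).
    n, d = X.shape
    w = np.ones(n) if w is None else w
    theta = np.zeros(d + 1)
    Xb = np.hstack([X, np.ones((n, 1))])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xb @ theta))
        theta -= lr * (Xb.T @ (w * (p - y))) / n
    return theta

def predict_proba(X, theta):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return 1.0 / (1.0 + np.exp(-Xb @ theta))

def rank_pruning(X, s):
    # s: observed (possibly flipped) binary labels.
    theta = fit_logreg(X, s)
    g = predict_proba(X, theta)        # g(x) ~ P(s = 1 | x)
    lb = g[s == 1].mean()              # confident-positive threshold
    ub = g[s == 0].mean()              # confident-negative threshold
    P_conf = g >= lb                   # examples confidently positive
    N_conf = g <= ub                   # examples confidently negative
    # Noise-rate estimates from label disagreement in the confident sets
    # (a simplification of the paper's estimators).
    rho1 = (s[P_conf] == 0).mean()     # ~ P(s = 0 | y = 1)
    rho0 = (s[N_conf] == 1).mean()     # ~ P(s = 1 | y = 0)
    # Prune the rho1-fraction lowest-ranked observed positives and the
    # rho0-fraction highest-ranked observed negatives.
    keep = np.ones(len(s), dtype=bool)
    pos, neg = np.where(s == 1)[0], np.where(s == 0)[0]
    k1, k0 = int(rho1 * len(pos)), int(rho0 * len(neg))
    if k1:
        keep[pos[np.argsort(g[pos])[:k1]]] = False
    if k0:
        keep[neg[np.argsort(g[neg])[-k0:]]] = False
    # Refit on the pruned set, upweighting survivors to compensate
    # for the removed mass (simplified reweighting).
    w = np.where(s == 1, 1.0 / (1.0 - rho1), 1.0 / (1.0 - rho0))
    return fit_logreg(X[keep], s[keep], w=w[keep])
```

Note that the whole procedure is just two classifier fits plus sorting, which is where the O(T) claim comes from: the cost is dominated by the fitting time T of whatever probabilistic classifier is plugged in.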

Curtis G. Northcutt, Tailin Wu, Isaac L. Chuang • 2017

Related benchmarks

Task                              | Dataset           | Metric   | Result | Rank
Positive-Unlabeled Classification | CIFAR-10 (test)   | Accuracy | 88.74  | 19
Positive-Unlabeled Learning       | SVHN (test)       | Accuracy | 0.8173 | 15
Positive-Unlabeled Learning       | STL-10 (test)     | Accuracy | 92.88  | 14
Positive-Unlabeled Classification | Alzheimer dataset | F1 Score | 62.1   | 11
Positive-Unlabeled Learning       | ADNI (test)       | Accuracy | 0.6203 | 6
