
Robust Data Pruning under Label Noise via Maximizing Re-labeling Accuracy

About

Data pruning, which aims to downsize a large training set into a small informative subset, is crucial for reducing the enormous computational costs of modern deep learning. Though large-scale data collections invariably contain annotation noise and numerous robust learning methods have been developed, data pruning for the noise-robust learning scenario has received little attention. With state-of-the-art Re-labeling methods that self-correct erroneous labels while training, it is challenging to identify which subset induces the most accurate re-labeling of erroneous labels in the entire training set. In this paper, we formalize the problem of data pruning with re-labeling. We first show that the likelihood of a training example being correctly re-labeled is proportional to the prediction confidence of its neighborhood in the subset. Therefore, we propose a novel data pruning algorithm, Prune4Rel, that finds a subset maximizing the total neighborhood confidence of all training examples, thereby maximizing the re-labeling accuracy and generalization performance. Extensive experiments on four real-world and one synthetic noisy dataset show that Prune4Rel outperforms the baselines with Re-labeling models by up to 9.1%, as well as those with a standard model by up to 21.6%.
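The selection idea described above can be sketched as a greedy subset search: repeatedly add the example that most increases the total neighborhood confidence of the whole training set. The concrete capped-coverage surrogate, the `sim`/`conf` inputs, and the function name below are illustrative assumptions for this sketch, not the paper's exact utility or implementation.

```python
import numpy as np

def prune4rel_greedy(sim, conf, budget):
    """Hypothetical greedy sketch of neighborhood-confidence maximization.

    sim  : (n, n) pairwise similarity matrix with entries in [0, 1].
    conf : (n,) prediction confidence of each training example.
    Greedily grows a subset S of size `budget` to maximize a capped
    total neighborhood confidence,
        sum_i min(1, sum_{j in S} sim[i, j] * conf[j]),
    a submodular surrogate under which greedy selection enjoys the
    standard (1 - 1/e) approximation guarantee.
    """
    n = sim.shape[0]
    # weight[i, j]: how much selecting j contributes to re-labeling example i.
    weight = sim * conf[None, :]
    cover = np.zeros(n)                  # current neighborhood confidence per example
    remaining = np.ones(n, dtype=bool)
    selected = []
    for _ in range(budget):
        # Marginal gain of each candidate j:
        #   sum_i [ min(1, cover_i + weight[i, j]) - min(1, cover_i) ]
        gains = (np.minimum(1.0, cover[:, None] + weight).sum(axis=0)
                 - np.minimum(1.0, cover).sum())
        gains[~remaining] = -np.inf      # never re-pick a selected example
        j = int(np.argmax(gains))
        selected.append(j)
        remaining[j] = False
        cover = np.minimum(1.0, cover + weight[:, j])
    return selected
```

On a toy 3-example problem where examples 0 and 1 are mutual neighbors and example 2 is isolated, the sketch first picks the confident hub (example 0) and then the isolated example 2, since example 1's neighborhood is already largely covered.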

Dongmin Park, Seola Choi, Doyoung Kim, Hwanjun Song, Jae-Gil Lee • 2023

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Image Classification | CIFAR-10N (Worst) | Accuracy 92.9 | 78 |
| Image Classification | CIFAR-100 Noisy (test) | -- | 25 |
| Image Classification | ImageNet-1K 20% synthetic label noise (test) | Accuracy 60 | 16 |
| Image Classification | CIFAR-10N Random Noise (test) | Accuracy 95.3 | 6 |

Other info

Code
