Generalized Cross Entropy Loss for Training Deep Neural Networks with Noisy Labels

About

Deep neural networks (DNNs) have achieved tremendous success in a variety of applications across many disciplines. Yet, their superior performance comes at the high cost of requiring large-scale, correctly annotated datasets. Moreover, because of DNNs' rich capacity, errors in training labels can hamper performance. To combat this problem, mean absolute error (MAE) has recently been proposed as a noise-robust alternative to the commonly used categorical cross entropy (CCE) loss. However, as we show in this paper, MAE can perform poorly with DNNs and challenging datasets. Here, we present a theoretically grounded set of noise-robust loss functions that can be seen as a generalization of MAE and CCE. The proposed loss functions can be readily applied with any existing DNN architecture and algorithm, while yielding good performance in a wide range of noisy label scenarios. We report results from experiments conducted with the CIFAR-10, CIFAR-100 and FASHION-MNIST datasets and synthetically generated noisy labels.
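For reference, the generalized (L_q) loss introduced in the paper is L_q(f(x), y) = (1 − f_y(x)^q) / q with q ∈ (0, 1]; it recovers CCE in the limit q → 0 and MAE (up to scale) at q = 1. The sketch below is a minimal PyTorch implementation of this basic form; the function name and the q = 0.7 default are illustrative, and the truncated variant also described in the paper is not shown.

```python
import torch
import torch.nn.functional as F

def lq_loss(logits: torch.Tensor, targets: torch.Tensor, q: float = 0.7) -> torch.Tensor:
    """Generalized cross entropy (L_q) loss: (1 - p_y^q) / q.

    Interpolates between categorical cross entropy (q -> 0) and
    mean absolute error, up to scale (q = 1), trading fit for
    robustness to noisy labels.
    """
    probs = F.softmax(logits, dim=1)
    # Probability the model assigns to each sample's (possibly noisy) label.
    p_true = probs.gather(1, targets.unsqueeze(1)).squeeze(1)
    return ((1.0 - p_true.pow(q)) / q).mean()
```

Used as a drop-in replacement for F.cross_entropy (e.g. loss = lq_loss(model(x), y)), which is consistent with the abstract's claim that the losses apply to any existing DNN architecture and training algorithm.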

Zhilu Zhang, Mert R. Sabuncu · 2018

Related benchmarks

Task | Dataset | Metric | Result | Rank
Image Classification | CIFAR-100 (test) | Accuracy | 72.27 | 3518
Image Classification | CIFAR-10 (test) | Accuracy | 90.97 | 3381
Image Classification | ImageNet (val) | Top-1 Acc | 60.52 | 1206
Image Classification | CIFAR-10 (test) | Accuracy | 93.43 | 906
Image Classification | MNIST (test) | Accuracy | 99.27 | 882
Image Classification | CIFAR-100 | -- | -- | 622
Image Classification | Clothing1M (test) | Accuracy | 72.4 | 546
Fine-grained Image Classification | CUB200 2011 (test) | Accuracy | 62.92 | 536
Image Classification | CIFAR-10 | Accuracy | 90.91 | 471
Image Classification | SVHN (test) | Accuracy | 90.82 | 362
Showing 10 of 111 rows
...
