
Generalized Jensen-Shannon Divergence Loss for Learning with Noisy Labels

About

Prior works have found it beneficial to combine provably noise-robust loss functions, e.g., mean absolute error (MAE), with standard categorical loss functions, e.g., cross entropy (CE), to improve their learnability. Here, we propose to use the Jensen-Shannon divergence as a noise-robust loss function and show that it interestingly interpolates between CE and MAE with a controllable mixing parameter. Furthermore, we make the crucial observation that CE exhibits lower consistency around noisy data points. Based on this observation, we adopt a generalized version of the Jensen-Shannon divergence for multiple distributions to encourage consistency around data points. Using this loss function, we show state-of-the-art results on both synthetic (CIFAR) and real-world (e.g., WebVision) noise with varying noise rates.
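Since the page only gives the abstract, here is a minimal PyTorch sketch of the two losses it describes: the weighted Jensen-Shannon (JS) divergence between the one-hot label and the prediction, and its generalization (GJS) over the label plus multiple predicted distributions (e.g., from augmented views). The function names, the `pi_label` weight split, and the epsilon smoothing are illustrative assumptions, not the authors' reference implementation; the published method also applies a scaling of the loss (omitted here) so that the limiting cases match CE and MAE exactly.

```python
import torch
import torch.nn.functional as F


def js_loss(logits, targets, pi=0.5, eps=1e-8):
    """Weighted JS divergence between the one-hot label e_y and prediction p:

        JS_pi(e_y, p) = pi * KL(e_y || m) + (1 - pi) * KL(p || m),
        with mixture m = pi * e_y + (1 - pi) * p.

    As pi -> 0 the (suitably scaled) loss recovers CE; as pi -> 1, MAE.
    """
    p = F.softmax(logits, dim=-1)
    e_y = F.one_hot(targets, num_classes=p.size(-1)).float()
    m = pi * e_y + (1.0 - pi) * p
    kl_label = (e_y * (torch.log(e_y + eps) - torch.log(m + eps))).sum(-1)
    kl_pred = (p * (torch.log(p + eps) - torch.log(m + eps))).sum(-1)
    return (pi * kl_label + (1.0 - pi) * kl_pred).mean()


def gjs_loss(logits_list, targets, pi_label=0.5, eps=1e-8):
    """Generalized JS divergence over the label and M predictions
    (e.g., from M augmented views of the same image):

        GJS(q_1, ..., q_{M+1}) = H(sum_i w_i q_i) - sum_i w_i H(q_i),

    where q_1 = e_y gets weight pi_label and the M predictions split the
    remaining weight (an assumed weighting for this sketch). Penalizing
    disagreement among views encourages consistency around data points.
    """
    probs = [F.softmax(l, dim=-1) for l in logits_list]
    e_y = F.one_hot(targets, num_classes=probs[0].size(-1)).float()
    dists = [e_y] + probs
    weights = [pi_label] + [(1.0 - pi_label) / len(probs)] * len(probs)
    mixture = sum(w * d for w, d in zip(weights, dists))

    def entropy(q):
        return -(q * torch.log(q + eps)).sum(-1)

    gjs = entropy(mixture) - sum(w * entropy(d) for w, d in zip(weights, dists))
    return gjs.mean()


if __name__ == "__main__":
    torch.manual_seed(0)
    logits_a = torch.randn(4, 10)    # predictions for one augmented view
    logits_b = torch.randn(4, 10)    # predictions for a second view
    y = torch.randint(0, 10, (4,))   # (possibly noisy) integer labels
    print(js_loss(logits_a, y, pi=0.5))
    print(gjs_loss([logits_a, logits_b], y, pi_label=0.3))
```

Note that because the label distribution is one-hot, its entropy term in GJS vanishes, so the loss reduces to the entropy of the mixture minus the weighted entropies of the predictions.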

Erik Englesson, Hossein Azizpour • 2021

Related benchmarks

Task                              Dataset                              Metric           Result   Rank
Image Classification              CIFAR-100 (test)                     Accuracy         75.71    3518
Image Classification              CIFAR-10 (test)                      Accuracy         95.33    3381
Commonsense Reasoning             WinoGrande                           Accuracy         74.4     1085
Mathematical Reasoning            GSM8K                                Accuracy         79.8     312
Image Classification              ILSVRC 2012 (val)                    Top-1 Accuracy   75.5     156
Commonsense Inference             HellaSwag                            Accuracy         78.9     91
Image Classification              ANIMAL-10N (test)                    Accuracy         84.2     83
Image Classification              WebVision 1.0 (val)                  Top-1 Accuracy   79.28    59
Large Language Model Evaluation   MMLU, GSM8K, HellaSwag, WinoGrande   Average Score    75.3     58
Image Classification              Food-101N (test)                     --               --       58
(Showing 10 of 26 rows.)

Other info

Code
