Generalized Jensen-Shannon Divergence Loss for Learning with Noisy Labels
About
Prior works have found it beneficial to combine provably noise-robust loss functions, e.g., mean absolute error (MAE), with standard categorical loss functions, e.g., cross entropy (CE), to improve their learnability. Here, we propose to use the Jensen-Shannon divergence as a noise-robust loss function and show that it interestingly interpolates between CE and MAE with a controllable mixing parameter. Furthermore, we make the crucial observation that CE exhibits lower consistency around noisy data points. Based on this observation, we adopt a generalized version of the Jensen-Shannon divergence for multiple distributions to encourage consistency around data points. Using this loss function, we show state-of-the-art results on both synthetic (CIFAR) and real-world (e.g., WebVision) noise with varying noise rates.
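To make the interpolation concrete: with two distributions (the one-hot label and the model's prediction), the weighted Jensen-Shannon divergence recovers a scaled CE as the label weight goes to 0 and MAE as it goes to 1; the generalized version adds predictions on multiple augmented views to the mixture, which is what encourages consistency. Below is a minimal PyTorch sketch of this generalized loss. The function name `generalized_js_loss`, the epsilon smoothing, and the omission of the paper's normalizing constant are simplifications for illustration, not the official implementation.

```python
import torch
import torch.nn.functional as F

def generalized_js_loss(logits_list, targets, weights, eps=1e-8):
    """Sketch of a generalized Jensen-Shannon (GJS) loss.

    logits_list: list of [B, C] logit tensors, one per augmented view.
    targets:     [B] integer class labels.
    weights:     list of mixing weights (pi_1 for the label distribution,
                 pi_2..pi_M for the predictive distributions); should sum to 1.
    """
    num_classes = logits_list[0].shape[1]
    # p_1 is the one-hot label distribution; p_2..p_M are predictions.
    dists = [F.one_hot(targets, num_classes).float()]
    dists += [F.softmax(logits, dim=1) for logits in logits_list]

    # Mixture distribution m = sum_i pi_i * p_i.
    mixture = sum(w * p for w, p in zip(weights, dists))

    # GJS = sum_i pi_i * KL(p_i || m); eps avoids log(0) on the one-hot p_1.
    per_sample = sum(
        w * torch.sum(p * (torch.log(p + eps) - torch.log(mixture + eps)), dim=1)
        for w, p in zip(weights, dists)
    )
    return per_sample.mean()

# Example usage with random data: two augmented views, pi = (0.5, 0.25, 0.25).
logits_views = [torch.randn(4, 10), torch.randn(4, 10)]
labels = torch.randint(0, 10, (4,))
print(generalized_js_loss(logits_views, labels, [0.5, 0.25, 0.25]))
```

Setting the label weight pi_1 large pulls the loss toward the noise-robust MAE-like regime, while a small pi_1 behaves more like CE; the extra predictive terms penalize disagreement between views around each data point.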
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Image Classification | CIFAR-100 (test) | Accuracy | 75.71 | 3518 |
| Image Classification | CIFAR-10 (test) | Accuracy | 95.33 | 3381 |
| Image Classification | ILSVRC 2012 (val) | Top-1 Accuracy | 75.5 | 156 |
| Image Classification | ANIMAL-10N (test) | Accuracy | 84.2 | 83 |
| Image Classification | WebVision 1.0 (val) | Top-1 Accuracy | 79.28 | 59 |
| Image Classification | Food-101N (test) | -- | -- | 58 |