
Improving Calibration for Long-Tailed Recognition

About

Deep neural networks may perform poorly when training datasets are heavily class-imbalanced. Recent two-stage methods decouple representation learning from classifier learning to improve performance, but the resulting models remain miscalibrated. To address this, we design two methods that improve both calibration and performance in such scenarios. Motivated by the observation that the predicted probability distributions of classes are highly related to the numbers of class instances, we propose label-aware smoothing, which handles the varying degrees of over-confidence across classes and improves classifier learning. Because the two stages use different samplers, a dataset bias arises between them; to mitigate it, we further propose shifted batch normalization within the decoupling framework. Our proposed methods set new records on multiple popular long-tailed recognition benchmark datasets, including CIFAR-10-LT, CIFAR-100-LT, ImageNet-LT, Places-LT, and iNaturalist 2018. Code will be available at https://github.com/Jia-Research-Lab/MiSLAS.
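The core idea of label-aware smoothing is that the smoothing strength applied to a training label should depend on how many instances its class has: frequent (head) classes tend to be more over-confident and so receive stronger smoothing than rare (tail) classes. The sketch below illustrates this with a simple schedule that interpolates the per-class smoothing factor between `eps_min` and `eps_max` by log class frequency; the function name, the interpolation schedule, and the hyper-parameter values are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def label_aware_smoothing_targets(labels, class_counts, eps_min=0.1, eps_max=0.4):
    """Build smoothed one-hot targets whose smoothing factor grows with
    class frequency: head classes (more over-confident) get stronger
    smoothing than tail classes.

    labels       : list of integer class indices for a batch
    class_counts : number of training instances per class
    eps_min/max  : smoothing applied to the rarest / most frequent class
                   (illustrative values, not the paper's schedule)
    """
    counts = np.asarray(class_counts, dtype=float)
    n_cls = len(counts)
    # Interpolate epsilon between eps_min (rarest class) and eps_max
    # (most frequent class) based on log class frequency.
    log_c = np.log(counts)
    frac = (log_c - log_c.min()) / max(log_c.max() - log_c.min(), 1e-12)
    eps = eps_min + (eps_max - eps_min) * frac  # per-class smoothing factor
    # Standard label smoothing, but with a class-dependent epsilon:
    # true class gets 1 - eps[y], the rest share eps[y] uniformly.
    targets = np.zeros((len(labels), n_cls))
    for i, y in enumerate(labels):
        targets[i] = eps[y] / (n_cls - 1)
        targets[i, y] = 1.0 - eps[y]
    return targets
```

Training the stage-two classifier against these soft targets with a cross-entropy loss then penalizes over-confident head-class predictions more than tail-class ones, which is how the method improves calibration under imbalance.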

Zhisheng Zhong, Jiequan Cui, Shu Liu, Jiaya Jia • 2021

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Image Classification | ImageNet LT | Top-1 Accuracy | 53.4 | 251 |
| Long-Tailed Image Classification | ImageNet-LT (test) | Top-1 Acc (Overall) | 53.7 | 220 |
| Image Classification | CIFAR-10 long-tailed (test) | Top-1 Acc | 89.8 | 201 |
| Image Classification | iNaturalist 2018 (test) | Top-1 Accuracy | 71.6 | 192 |
| Image Classification | CIFAR-10-LT (test) | -- | -- | 185 |
| Image Classification | ImageNet-LT (test) | Top-1 Acc (All) | 52.7 | 159 |
| Image Classification | CIFAR-100 long-tailed (test) | Accuracy | 62.3 | 155 |
| Image Classification | Places-LT (test) | Accuracy (Medium) | 43.3 | 128 |
| Image Classification | CIFAR-100-LT (Imbalance Ratio 100) | Top-1 Acc | 48.68 | 88 |
| Image Classification | CIFAR-100-LT (Imbalance Ratio 10) | Top-1 Acc | 64.18 | 83 |

Showing 10 of 99 rows.

Other info

Code: https://github.com/Jia-Research-Lab/MiSLAS
