
Mitigating Neural Network Overconfidence with Logit Normalization

About

Detecting out-of-distribution inputs is critical for the safe deployment of machine learning models in the real world. However, neural networks are known to suffer from overconfidence: they produce abnormally high confidence for both in- and out-of-distribution inputs. In this work, we show that this issue can be mitigated through Logit Normalization (LogitNorm) -- a simple fix to the cross-entropy loss -- by enforcing a constant vector norm on the logits during training. Our method is motivated by the observation that the norm of the logit vector keeps increasing during training, leading to overconfident outputs. The key idea behind LogitNorm is thus to decouple the influence of the output's norm from network optimization. Trained with LogitNorm, neural networks produce highly distinguishable confidence scores between in- and out-of-distribution data. Extensive experiments demonstrate the superiority of LogitNorm, reducing the average FPR95 by up to 42.30% on common benchmarks.
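The loss described above can be sketched in a few lines: each logit vector is rescaled to a constant norm (divided by its L2 norm and a temperature) before the usual cross-entropy is applied. The snippet below is a minimal NumPy illustration, not the authors' implementation; the temperature value `tau=0.04` and the epsilon for numerical stability are illustrative assumptions.

```python
import numpy as np

def logitnorm_cross_entropy(logits, labels, tau=0.04):
    """Cross-entropy on L2-normalized logits (LogitNorm sketch).

    logits: (batch, classes) array; labels: (batch,) integer class ids.
    tau is a temperature hyperparameter (0.04 is an illustrative default).
    """
    # Rescale each logit vector to constant norm, decoupling its magnitude.
    norms = np.linalg.norm(logits, axis=1, keepdims=True) + 1e-7
    normed = logits / (norms * tau)
    # Standard cross-entropy via a numerically stable log-softmax.
    shifted = normed - normed.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()
```

Because the normalization removes the logit magnitude, scaling the logits by any positive constant leaves the loss (essentially) unchanged, which is exactly the decoupling the method aims for.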

Hongxin Wei, Renchunzi Xie, Hao Cheng, Lei Feng, Bo An, Yixuan Li • 2022

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Out-of-Distribution Detection | ImageNet OOD Average 1k (test) | FPR@95 | 28.41 | 137 |
| Out-of-Distribution Detection | CIFAR-100 | AUROC | 94.53 | 107 |
| OOD Detection | CIFAR-100 standard (test) | AUROC (%) | 86.6 | 94 |
| Out-of-Distribution Detection | CIFAR-10 (ID) vs SVHN (OOD) (test) | AUROC | 76.82 | 79 |
| Out-of-Distribution Detection | CIFAR100 (test) | AUROC | 81.71 | 57 |
| OOD Detection | CIFAR-10 IND iSUN OOD | AUROC | 76.03 | 42 |
| OOD Detection | Textures (OOD) with CIFAR-10 (ID) (test) | FPR@95 | 93.9 | 40 |
| Failure Detection | CIFAR100 (test) | AURC | 125.6 | 39 |
| Failure Detection | CIFAR100 vs. SVHN | AURC Score | 356.9 | 39 |
| Out-of-Distribution Detection | CIFAR100 | AURC | 235.5 | 39 |

Showing 10 of 41 rows.
