
Enhancing Out-of-Distribution Detection with Extended Logit Normalization

About

Out-of-distribution (OOD) detection is essential for the safe deployment of machine learning models. Extensive work has focused on devising scoring functions for detecting OOD samples, while only a few studies train neural networks with model-calibration objectives, which often compromise predictive accuracy and support only a limited choice of scoring functions. In this work, we first identify a feature-collapse phenomenon in Logit Normalization (LogitNorm), then propose a novel hyperparameter-free formulation that significantly benefits a wide range of post-hoc detection methods. Specifically, we devise a feature distance-awareness loss term in addition to LogitNorm, termed ELogitNorm, which enables improved OOD detection and in-distribution (ID) confidence calibration. Extensive experiments across standard benchmarks demonstrate that our approach outperforms state-of-the-art training-time methods in OOD detection while maintaining strong ID classification accuracy. Our code is available at: https://github.com/limchaos/ElogitNorm.
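For context, the LogitNorm objective that this work extends trains with cross-entropy on L2-normalized, temperature-scaled logits. Below is a minimal numpy sketch of that baseline loss; the temperature `tau` is the hyperparameter of the original LogitNorm (the paper's ELogitNorm extension is described as hyperparameter-free, and its exact feature distance-awareness term is not specified in this abstract, so it is not reproduced here).

```python
import numpy as np

def logitnorm_cross_entropy(logits, labels, tau=0.04):
    """Baseline LogitNorm loss: cross-entropy on L2-normalized logits.

    logits: (N, C) float array of raw network outputs.
    labels: (N,) int array of class indices.
    tau:    temperature hyperparameter of the original LogitNorm;
            tau=0.04 is an illustrative default, not a value from this page.
    """
    # Normalize each logit vector to unit L2 norm, then scale by 1/tau.
    norms = np.linalg.norm(logits, axis=1, keepdims=True) + 1e-7
    z = logits / (norms * tau)
    # Numerically stable log-softmax.
    z = z - z.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    # Mean negative log-likelihood of the true classes.
    return -log_probs[np.arange(len(labels)), labels].mean()
```

A useful property to note: because the logits are divided by their own norm, the loss is invariant to rescaling the logit vector, which decouples the optimization from logit magnitude and is the motivation for normalization-based calibration objectives.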

Yifan Ding, Xixi Liu, Jonas Unger, Gabriel Eilertsen • 2025

Related benchmarks

Task | Dataset | Result (AUROC) | Rank
OOD Detection | SVHN (test) | 0.9878 | 84
OOD Detection | Places365 OOD (test) | 94.44 | 29
OOD Detection | CIFAR-100 OOD (test) | 91.05 | 22
OOD Detection | TIN OOD (test) | 93.88 | 12
OOD Detection | MNIST OOD (test) | 99.54 | 12
OOD Detection | Textures OOD (test) | 95.78 | 12
OOD Detection | Far-OOD (test) | 0.9694 | 12
Out-of-Distribution Detection | ImageNet-200 (ID) vs NINCO (OOD) 1.0 (test) | 83.24 | 3
Out-of-Distribution Detection | ImageNet-200 (ID) vs Near-OOD (average) 1.0 (test) | 76.88 | 3
Out-of-Distribution Detection | ImageNet-200 (ID) vs iNaturalist (OOD) 1.0 (test) | 96.15 | 3

Showing 10 of 14 rows.
