Enhancing Out-of-Distribution Detection with Extended Logit Normalization
About
Out-of-distribution (OOD) detection is essential for the safe deployment of machine learning models. Extensive work has focused on devising various scoring functions for detecting OOD samples, while only a few studies focus on training neural networks with model calibration objectives, which often compromise predictive accuracy and support only a limited choice of scoring functions. In this work, we first identify the feature collapse phenomenon in Logit Normalization (LogitNorm), then propose a novel hyperparameter-free formulation that significantly benefits a wide range of post-hoc detection methods. Specifically, we devise a feature distance-awareness loss term on top of LogitNorm, termed $\textbf{ELogitNorm}$, which enables improved OOD detection and in-distribution (ID) confidence calibration. Extensive experiments across standard benchmarks demonstrate that our approach outperforms state-of-the-art training-time methods in OOD detection while maintaining strong ID classification accuracy. Our code is available at: https://github.com/limchaos/ElogitNorm.
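For context, the LogitNorm baseline that ELogitNorm extends trains with cross-entropy on L2-normalized logits scaled by a temperature. The sketch below shows that baseline objective only; the temperature value and the epsilon guard are conventional choices, and the paper's additional feature distance-awareness term (which makes ELogitNorm hyperparameter-free) is not reproduced here — see the linked repository for the actual formulation.

```python
import torch
import torch.nn.functional as F

def logitnorm_loss(logits: torch.Tensor, targets: torch.Tensor,
                   tau: float = 0.04) -> torch.Tensor:
    """LogitNorm objective: cross-entropy on temperature-scaled,
    L2-normalized logits. `tau` is a temperature hyperparameter
    (0.04 is a commonly used value, not taken from this abstract)."""
    # Per-sample L2 norm of the logit vector; epsilon avoids division by zero.
    norms = torch.norm(logits, p=2, dim=-1, keepdim=True) + 1e-7
    # Normalizing decouples the loss from logit magnitude, which is the
    # mechanism LogitNorm uses to mitigate overconfidence.
    normalized = logits / (norms * tau)
    return F.cross_entropy(normalized, targets)
```

In training, this replaces the standard cross-entropy term; post-hoc OOD scoring functions are then applied to the trained network unchanged.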
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| OOD Detection | SVHN (test) | AUROC | 0.9878 | 61 |
| OOD Detection | CIFAR-100 OOD (test) | AUROC | 91.05 | 12 |
| OOD Detection | TIN OOD (test) | AUROC | 93.88 | 12 |
| OOD Detection | MNIST OOD (test) | AUROC | 99.54 | 12 |
| OOD Detection | Textures OOD (test) | AUROC | 95.78 | 12 |
| OOD Detection | Places365 OOD (test) | AUROC | 94.44 | 12 |
| OOD Detection | Far-OOD (test) | AUROC | 0.9694 | 12 |
| Out-of-Distribution Detection | ImageNet-200 (ID) vs NINCO (OOD) 1.0 (test) | AUROC | 83.24 | 3 |
| Out-of-Distribution Detection | ImageNet-200 (ID) vs Near-OOD (average) 1.0 (test) | AUROC | 76.88 | 3 |
| Out-of-Distribution Detection | ImageNet-200 (ID) vs iNaturalist (OOD) 1.0 (test) | AUROC | 96.15 | 3 |