
OpenMix: Exploring Outlier Samples for Misclassification Detection

About

Reliable confidence estimation for deep neural classifiers is a challenging yet fundamental requirement in high-stakes applications. Unfortunately, modern deep neural networks are often overconfident in their erroneous predictions. In this work, we exploit easily available outlier samples, i.e., unlabeled samples from non-target classes, to help detect misclassification errors. In particular, we find that the well-known Outlier Exposure method, although powerful at detecting out-of-distribution (OOD) samples from unknown classes, provides no gain in identifying misclassification errors. Based on this observation, we propose OpenMix, a novel method that incorporates open-world knowledge by learning to reject uncertain pseudo-samples generated via outlier transformation. OpenMix significantly improves confidence reliability across various scenarios, establishing a strong and unified framework for detecting both misclassified samples from known classes and OOD samples from unknown classes. The code is publicly available at https://github.com/Impression2805/OpenMix.
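The abstract only names the key ingredients (outlier transformation plus learning to reject), so the following is a hypothetical NumPy sketch, not the paper's actual formulation: outlier inputs are mixed with labeled in-distribution inputs Mixup-style, and the mixed-away label mass is assigned to an extra "reject" class. All function names and details below are illustrative.

```python
import numpy as np

def openmix_targets(y_id, num_classes, lam):
    """Soft labels for a mixed batch: a fraction lam of the label mass stays
    on the true class; the rest goes to an extra reject class at index
    num_classes. Illustrative only -- not the paper's exact recipe."""
    n = len(y_id)
    targets = np.zeros((n, num_classes + 1))
    targets[np.arange(n), np.asarray(y_id)] = lam
    targets[:, num_classes] = 1.0 - lam
    return targets

def openmix_batch(x_id, y_id, x_outlier, num_classes, rng, alpha=1.0):
    """Mix ID inputs with unlabeled outlier inputs using a Beta-sampled
    coefficient, and build the matching soft targets."""
    lam = rng.beta(alpha, alpha)
    x_mix = lam * x_id + (1.0 - lam) * x_outlier
    return x_mix, openmix_targets(y_id, num_classes, lam)
```

A classifier trained with a (num_classes + 1)-way softmax and soft-label cross-entropy on such batches learns to route uncertain, outlier-like inputs toward the reject class, whose probability can then serve as a confidence signal at test time.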

Fei Zhu, Zhen Cheng, Xu-Yao Zhang, Cheng-Lin Liu • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| OOD Detection | CIFAR-100 standard (test) | AUROC (%) | 84.88 | 94 |
| Out-of-Distribution Detection | CIFAR100 | AURC | 342.2 | 39 |
| Failure Detection | CIFAR100 vs. SVHN | AURC Score | 406.8 | 39 |
| Failure Detection | CIFAR100 (test) | AURC | 85.66 | 39 |
| Misclassification Detection | CIFAR-10 | AUROC | 94.81 | 28 |
| Misclassification Detection | CIFAR-100 | AURC | 73.84 | 27 |
| Out-of-Distribution Detection | CIFAR-10 (ID) vs 6 OOD datasets (Textures, SVHN, Places365, LSUN-C, LSUN-R, iSUN) (test) | FPR@95 | 16.86 | 24 |
| Misclassification Detection | CIFAR-10-C 1.0 (test) | AUROC | 90.38 | 9 |
| Misclassification Detection | CIFAR-100-C 1.0 (test) | AUROC | 84.05 | 9 |
