
Provably Robust Detection of Out-of-distribution Data (almost) for free

About

The application of machine learning in safety-critical systems requires a reliable assessment of uncertainty. However, deep neural networks are known to produce highly overconfident predictions on out-of-distribution (OOD) data. Even if trained to be non-confident on OOD data, one can still adversarially manipulate OOD data so that the classifier again assigns high confidence to the manipulated samples. We show that two previously published defenses can be broken by better adapted attacks, highlighting the importance of robustness guarantees around OOD data. Since the existing method for this task is hard to train and significantly limits accuracy, we construct a classifier that can simultaneously achieve provably adversarially robust OOD detection and high clean accuracy. Moreover, by slightly modifying the classifier's architecture our method provably avoids the asymptotic overconfidence problem of standard neural networks. We provide code for all our experiments.
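To make the threat model concrete: an attacker takes an OOD input on which the classifier is non-confident and perturbs it within a small norm ball so the maximum softmax probability becomes high again. A minimal sketch of such a confidence-maximizing attack, using gradient ascent on a toy linear softmax classifier (all names and the numpy-only setup are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def softmax(z):
    z = z - z.max()  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def confidence(W, x):
    # maximum softmax probability: the standard confidence score
    return softmax(W @ x).max()

def attack_confidence(W, x, eps=0.3, steps=50, lr=0.05):
    """Sign-gradient ascent on the confidence of a linear softmax
    classifier, constrained to an L-infinity ball of radius eps
    around the original OOD input x (a PGD-style sketch)."""
    x_adv = x.copy()
    for _ in range(steps):
        p = softmax(W @ x_adv)
        k = p.argmax()
        # gradient of p_k w.r.t. x for z = W x:
        # dp_k/dx = p_k * W^T (e_k - p)
        grad = p[k] * (W.T @ (np.eye(len(p))[k] - p))
        x_adv = x_adv + lr * np.sign(grad)
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project back into the ball
    return x_adv
```

A provably robust detector must guarantee an upper bound on the confidence over the whole perturbation ball, so no such ascent can succeed.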

Alexander Meinke, Julian Bitterwolf, Matthias Hein • 2021

Related benchmarks

Task | Dataset | Result | Rank
Out-of-Distribution Detection | CIFAR-100 (In-distribution) vs Smooth (OOD) | AUC 98.9 | 22
Out-of-Distribution Detection | Uniform Noise (test) | AUROC 99.8 | 15
Out-of-Distribution Detection | CIFAR-10 | AUC (LSUN) 99.2 | 8
Out-of-Distribution Detection | CIFAR-10 (In-distribution) vs SVHN (OOD) | AUC 98.3 | 8
Out-of-Distribution Detection | CIFAR-10 (In-distribution) vs LSUN_CR (OOD) | AUC 100 | 8
Out-of-Distribution Detection | CIFAR-10 (In-distribution) vs Smooth (OOD) | AUC 99.9 | 8
Out-of-Distribution Detection | CIFAR-10 (In-distribution) vs CIFAR-100 (OOD) | AUC 89.8 | 8
Out-of-Distribution Detection | CIFAR-100 | AUC (LSUN) 94.8 | 6
Out-of-Distribution Detection | CIFAR-100 (In-distribution) vs LSUN_CR (OOD) | AUC 100 | 6
Out-of-Distribution Detection | CIFAR-100 (In-distribution) vs SVHN (OOD) | AUC 91.5 | 6
Showing 10 of 18 rows
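All of the results above report AUC/AUROC: the area under the ROC curve for separating in-distribution from OOD samples by their confidence score, which equals the probability that a randomly chosen in-distribution sample receives a higher score than a randomly chosen OOD one. A small sketch of that computation via the Mann-Whitney U statistic (function name and interface are illustrative, not from the paper):

```python
import numpy as np

def auroc(in_scores, out_scores):
    """AUROC of an OOD detector: probability that a random
    in-distribution confidence score exceeds a random OOD one,
    with ties counted as one half."""
    in_scores = np.asarray(in_scores, dtype=float)
    out_scores = np.asarray(out_scores, dtype=float)
    # pairwise-comparison form of the Mann-Whitney U statistic
    greater = (in_scores[:, None] > out_scores[None, :]).sum()
    ties = (in_scores[:, None] == out_scores[None, :]).sum()
    return (greater + 0.5 * ties) / (in_scores.size * out_scores.size)
```

An AUC of 100 therefore means every in-distribution sample was scored above every OOD sample on that benchmark.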

Other info

Code
