
Towards neural networks that provably know when they don't know

About

It has recently been shown that ReLU networks produce arbitrarily over-confident predictions far away from the training data; in other words, ReLU networks do not know when they don't know. This property, however, is highly important in safety-critical applications. In the context of out-of-distribution (OOD) detection, a number of proposals have been made to mitigate this problem, but none of them comes with mathematical guarantees. In this paper we propose a new approach to OOD detection which overcomes both problems: it can be used with ReLU networks, it provides provably low-confidence predictions far away from the training data, and it yields the first certificates of low confidence in a neighborhood of an out-distribution point. In experiments we show that state-of-the-art methods fail in this worst-case setting, whereas our model can guarantee its performance while retaining state-of-the-art OOD detection performance.
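The over-confidence phenomenon the abstract refers to can be seen in a few lines of NumPy. The sketch below is a generic illustration, not the paper's model: a bias-free two-layer ReLU network with random weights is positively homogeneous, so scaling an input by a large factor scales every logit by that factor and drives the softmax confidence toward 1, no matter how far the point is from any training data.

```python
import numpy as np

# Toy illustration (not the paper's method): a small bias-free ReLU network
# with random weights. Bias-free ReLU networks are positively homogeneous,
# so logits(alpha * x) = alpha * logits(x) for alpha > 0, and the softmax
# confidence approaches 1 as alpha grows (biases do not change the asymptote).
rng = np.random.default_rng(0)
W1 = rng.standard_normal((16, 2))   # input dim 2 -> 16 hidden units
W2 = rng.standard_normal((3, 16))   # 16 hidden units -> 3 classes

def softmax_confidence(x):
    z = W2 @ np.maximum(W1 @ x, 0.0)   # ReLU network logits
    z = z - z.max()                    # shift for numerical stability
    p = np.exp(z) / np.exp(z).sum()
    return p.max()                     # confidence of the predicted class

x = rng.standard_normal(2)
conf_near = softmax_confidence(x)           # input near the origin
conf_far = softmax_confidence(1000.0 * x)   # same direction, far away
print(conf_near, conf_far)
```

Here `conf_far` is essentially 1.0: the network is maximally confident on an input it has never seen, which is exactly the failure mode the paper's certificates are designed to rule out.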

Alexander Meinke, Matthias Hein · 2019

Related benchmarks

| Task                          | Dataset                   | Metric               | Result | Rank |
|-------------------------------|---------------------------|----------------------|--------|------|
| OOD Detection                 | CIFAR-100 standard (test) | AUROC (%)            | 95.02  | 94   |
| OOD Detection                 | CIFAR-10 (test)           | AUROC                | 98.41  | 40   |
| Out-of-Distribution Detection | MNIST                     | –                    | –      | 13   |
| Out-of-Distribution Detection | FMNIST                    | –                    | –      | 13   |
| Confidence calibration        | CIFAR-10 ID (test)        | ECE                  | 2.91   | 9    |
| Confidence calibration        | CIFAR-100 ID (test)       | ECE                  | 6.16   | 9    |
| Confidence calibration        | FMNIST ID (test)          | ECE                  | 3.32   | 9    |
| Confidence calibration        | SVHN ID (test)            | ECE                  | 1.67   | 9    |
| Confidence calibration        | MNIST ID (test)           | ECE                  | 0.3    | 9    |
| Out-of-Distribution Detection | SVHN                      | OOD Score (CIFAR-10) | 16.47  | 9    |

Showing 10 of 15 rows.
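For reference, the two metrics reported above can be computed as follows. This is a generic sketch, not the evaluation code behind the leaderboard numbers: AUROC measures how well confidence scores separate in-distribution from out-of-distribution inputs, and ECE (Expected Calibration Error) measures how far predicted confidences are from empirical accuracies.

```python
import numpy as np

def auroc(in_scores, out_scores):
    """Fraction of (in, out) pairs where the in-distribution score is
    higher; ties count half. Equals the area under the ROC curve."""
    ins = np.asarray(in_scores, float)[:, None]
    outs = np.asarray(out_scores, float)[None, :]
    wins = (ins > outs).sum() + 0.5 * (ins == outs).sum()
    return wins / (ins.size * outs.size)

def ece(confidences, correct, n_bins=10):
    """Expected Calibration Error: bin predictions by confidence and
    average |accuracy - mean confidence|, weighted by bin size."""
    conf = np.asarray(confidences, float)
    corr = np.asarray(correct, float)
    bins = np.minimum((conf * n_bins).astype(int), n_bins - 1)
    err = 0.0
    for b in range(n_bins):
        mask = bins == b
        if mask.any():
            err += mask.mean() * abs(corr[mask].mean() - conf[mask].mean())
    return err

# Example: one inverted (in, out) pair out of four -> AUROC = 0.75.
print(auroc([0.9, 0.5], [0.6, 0.2]))
```

The pairwise AUROC formulation is O(n·m) and intended only to make the metric's definition concrete; the table's "AUROC (%)" entries are the same quantity scaled by 100.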
