
Why ReLU networks yield high-confidence predictions far away from the training data and how to mitigate the problem

About

Classifiers used in the wild, in particular in safety-critical systems, should not only have good generalization properties but should also know when they don't know; in particular, they should make low-confidence predictions far away from the training data. We show that ReLU-type neural networks, which yield a piecewise linear classifier function, fail in this regard, as they almost always produce high-confidence predictions far away from the training data. For bounded domains such as images, we propose a new robust optimization technique, similar to adversarial training, which enforces low-confidence predictions far away from the training data. We show that this technique is surprisingly effective in reducing the confidence of predictions far away from the training data while maintaining, compared to standard training, high-confidence predictions and test error on the original classification task.
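To make the idea concrete, below is a minimal PyTorch sketch of this kind of robust-optimization objective: alongside the usual cross-entropy loss on real data, the classifier is penalized for being confident on the worst-case point in a small ball around samples drawn far from the training data (uniform noise here). The noise distribution, PGD settings, and all names and hyperparameters are illustrative assumptions, not the paper's exact setup.

```python
import torch
import torch.nn.functional as F

def max_log_confidence(model, x):
    """Log of the highest predicted class probability for each input."""
    return model(x).log_softmax(dim=1).max(dim=1).values

def worst_case_noise(model, noise, eps=0.3, step=0.075, n_steps=10):
    """PGD search of the L-inf ball around each noise point for the input
    on which the classifier is MOST confident (illustrative settings)."""
    x = noise.clone()
    for _ in range(n_steps):
        x.requires_grad_(True)
        conf = max_log_confidence(model, x).sum()
        grad, = torch.autograd.grad(conf, x)
        with torch.no_grad():
            x = x + step * grad.sign()                # ascend confidence
            x = noise + (x - noise).clamp(-eps, eps)  # project onto the ball
            x = x.clamp(0.0, 1.0)                     # stay in image domain
    return x.detach()

def training_step(model, optimizer, images, labels, lam=1.0):
    noise = torch.rand_like(images)    # proxy for far-from-data inputs
    adv_noise = worst_case_noise(model, noise)
    # Cross-entropy on real data + penalty on confidence at worst-case noise:
    # minimizing the max log-probability pushes predictions toward uniform.
    loss = F.cross_entropy(model(images), labels) \
         + lam * max_log_confidence(model, adv_noise).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

As in adversarial training, the inner maximization makes the penalty robust: the model is discouraged from being confident anywhere in a neighborhood of the out-of-distribution sample, not just at the sample itself.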

Matthias Hein, Maksym Andriushchenko, Julian Bitterwolf • 2018

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Out-of-Distribution Detection | iNaturalist | FPR@95: 50.67 | 200 |
| Out-of-Distribution Detection | Textures | -- | 141 |
| Out-of-Distribution Detection | Places | FPR@95: 70.83 | 110 |
| Out-of-Distribution Detection | SUN | FPR@95: 68.36 | 71 |
| Out-of-Distribution Detection | CIFAR-100 In-distribution vs Smooth (OOD) | AUC: 99.9 | 22 |
| Out-of-Distribution Detection | MNIST | -- | 13 |
| Out-of-Distribution Detection | FMNIST | -- | 13 |
| Confidence calibration | MNIST ID (test) | ECE: 0.14 | 9 |
| Out-of-Distribution Detection | CIFAR-100 | SVHN Score: 86.46 | 9 |
| Out-of-Distribution Detection | SVHN | OOD Score (CIFAR-10): 45.61 | 9 |

Showing 10 of 27 rows.
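For reference, the table's headline detection metrics can be computed from per-example confidence scores roughly as follows: FPR@95 is the fraction of out-of-distribution inputs still accepted at the threshold that retains 95% of in-distribution inputs, and AUC is the AUROC of separating the two. This is a hedged sketch using scikit-learn, with illustrative variable names; the leaderboard's exact evaluation code may differ.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def fpr_at_95_tpr(id_scores, ood_scores):
    """FPR on OOD inputs at the threshold retaining 95% of ID inputs.
    Higher score = 'more in-distribution' (e.g., max softmax probability)."""
    thresh = np.percentile(id_scores, 5)         # keeps 95% of ID above it
    return float(np.mean(ood_scores >= thresh))  # OOD wrongly accepted

def auroc(id_scores, ood_scores):
    """AUROC of distinguishing ID (label 1) from OOD (label 0)."""
    labels = np.concatenate([np.ones_like(id_scores), np.zeros_like(ood_scores)])
    scores = np.concatenate([id_scores, ood_scores])
    return roc_auc_score(labels, scores)
```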
