
Learning Confidence for Out-of-Distribution Detection in Neural Networks

About

Modern neural networks are very powerful predictive models, but they are often incapable of recognizing when their predictions may be wrong. Closely related to this is the task of out-of-distribution detection, where a network must determine whether or not an input is outside of the set on which it is expected to safely perform. To jointly address these issues, we propose a method of learning confidence estimates for neural networks that is simple to implement and produces intuitively interpretable outputs. We demonstrate that on the task of out-of-distribution detection, our technique surpasses recently proposed techniques which construct confidence based on the network's output distribution, without requiring any additional labels or access to out-of-distribution examples. Additionally, we address the problem of calibrating out-of-distribution detectors, where we demonstrate that misclassified in-distribution examples can be used as a proxy for out-of-distribution examples.
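The abstract describes learning a confidence estimate jointly with the classifier: the network's prediction is interpolated toward the ground truth in proportion to how unconfident the network says it is, and a log penalty on the confidence keeps it from hedging on everything. Below is a minimal NumPy sketch of that kind of loss, not the authors' reference implementation; the function name `confidence_loss` and the penalty weight `lmbda` are illustrative choices.

```python
import numpy as np

def confidence_loss(probs, conf, target_onehot, lmbda=0.1):
    """Sketch of a confidence-learning loss.

    probs: softmax outputs, shape (batch, classes)
    conf: confidence estimates c in (0, 1), shape (batch,)
    target_onehot: one-hot labels, shape (batch, classes)
    """
    # Interpolate the prediction toward the target using confidence c:
    # a low-confidence network gets "hints" from the label.
    adjusted = conf[:, None] * probs + (1.0 - conf[:, None]) * target_onehot
    # Standard negative log-likelihood on the adjusted prediction.
    nll = -np.log(np.sum(adjusted * target_onehot, axis=-1))
    # Penalty that discourages always asking for hints (c near 0).
    penalty = -np.log(conf)
    return float(np.mean(nll + lmbda * penalty))

probs = np.array([[0.6, 0.3, 0.1]])   # a fairly uncertain prediction
target = np.array([[1.0, 0.0, 0.0]])  # one-hot ground-truth label
loss_hedged = confidence_loss(probs, np.array([0.2]), target)
loss_confident = confidence_loss(probs, np.array([0.9]), target)
```

For an uncertain prediction like the one above, claiming low confidence lowers the loss despite the penalty, which is the signal that trains the confidence output to flag inputs the network is likely to get wrong.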

Terrance DeVries, Graham W. Taylor · 2018

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Action Recognition | Something-Something v2 (test) | Top-1 Acc | 30.3 | 333 |
| Near-OOD Detection | CIFAR-100 Near-OOD (test) | AUROC | 71.6 | 93 |
| OOD Detection | CIFAR-10 | FPR@95 | 21.48 | 85 |
| Near-OOD Detection | CIFAR-10 | AUROC | 89.84 | 71 |
| OOD Detection | CIFAR100 Dfar | AUROC | 68.9 | 69 |
| Anomaly Segmentation | Fishyscapes Lost & Found (test) | FPR@95 | 22.11 | 61 |
| Near-OOD Detection | ImageNet-200 | AUROC | 79.1 | 36 |
| Far-OOD Detection | Average (CIFAR-10, CIFAR-100, TinyImageNet) | AUROC | 84.06 | 35 |
| Near-OOD Detection | CIFAR-10, CIFAR-100, TinyImageNet Average | AUROC | 80.18 | 35 |
| Far-OOD Detection | TinyImageNet | AUROC | 90.43 | 34 |

Showing 10 of 26 rows.
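The benchmarks above report AUROC and FPR@95, the standard OOD-detection metrics. As a reminder of what they measure, here is a small NumPy sketch, assuming a detector where higher scores mean "more in-distribution"; the function names are illustrative, not from any benchmark codebase.

```python
import numpy as np

def auroc(in_scores, out_scores):
    """AUROC via the rank-sum identity: the probability that a random
    in-distribution score exceeds a random out-of-distribution score
    (ties counted as half)."""
    in_s = np.asarray(in_scores, dtype=float)
    out_s = np.asarray(out_scores, dtype=float)
    greater = (in_s[:, None] > out_s[None, :]).sum()
    ties = (in_s[:, None] == out_s[None, :]).sum()
    return float((greater + 0.5 * ties) / (in_s.size * out_s.size))

def fpr_at_95_tpr(in_scores, out_scores):
    """FPR@95: fraction of OOD inputs accepted as in-distribution when
    the threshold is set so 95% of in-distribution inputs are accepted."""
    threshold = np.percentile(in_scores, 5)  # 95% of ID scores lie above
    return float(np.mean(np.asarray(out_scores, dtype=float) >= threshold))
```

A perfect detector scores AUROC 1.0 and FPR@95 0.0; a detector whose scores carry no information about the distribution sits at AUROC 0.5.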
