
Learning Confidence for Out-of-Distribution Detection in Neural Networks

About

Modern neural networks are very powerful predictive models, but they are often incapable of recognizing when their predictions may be wrong. Closely related to this is the task of out-of-distribution detection, where a network must determine whether or not an input is outside of the set on which it is expected to safely perform. To jointly address these issues, we propose a method of learning confidence estimates for neural networks that is simple to implement and produces intuitively interpretable outputs. We demonstrate that on the task of out-of-distribution detection, our technique surpasses recently proposed techniques which construct confidence based on the network's output distribution, without requiring any additional labels or access to out-of-distribution examples. Additionally, we address the problem of calibrating out-of-distribution detectors, where we demonstrate that misclassified in-distribution examples can be used as a proxy for out-of-distribution examples.
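The abstract describes jointly learning a confidence estimate alongside the classifier's predictions. A minimal NumPy sketch of one such confidence-learning objective is below; the function name, the hyperparameter `lam`, and the exact form of the loss are illustrative assumptions, not the paper's verified implementation:

```python
import numpy as np

def confidence_loss(probs, conf, target_onehot, lam=0.3):
    """Sketch of a confidence-learning objective.

    probs:         softmax predictions, shape (n_classes,)
    conf:          scalar confidence in (0, 1]
    target_onehot: one-hot ground-truth vector, shape (n_classes,)
    lam:           penalty weight (hypothetical value)
    """
    # Interpolate the prediction toward the ground truth, weighted by
    # confidence: a low-confidence prediction gets "hints" from the label.
    adjusted = conf * probs + (1.0 - conf) * target_onehot
    # Negative log-likelihood of the adjusted prediction.
    nll = -np.log(np.sum(adjusted * target_onehot))
    # Penalize low confidence so the network cannot always request hints.
    penalty = -np.log(conf)
    return nll + lam * penalty
```

With full confidence (`conf=1.0`) the loss reduces to the ordinary negative log-likelihood, so a low confidence output is only worthwhile when the classifier would otherwise be badly wrong; at test time the learned confidence can then be thresholded to flag out-of-distribution inputs.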

Terrance DeVries, Graham W. Taylor • 2018

Related benchmarks

Task                  Dataset                          Result           Rank
Action Recognition    Something-Something v2 (test)    Top-1 Acc 30.3   333
Anomaly Segmentation  Fishyscapes Lost & Found (test)  FPR@95 22.11     61
Anomaly Segmentation  Fishyscapes Static (test)        FPR@95 19.4      28
Anomaly Detection     Fishyscapes Static               AP 45            27
Anomaly Detection     Fishyscapes Lost & Found         AP 10.3          27
Action Recognition    UCF-101 1.0 (test)               Top-1 Acc 88.8   23
Action Recognition    Kinetics 400/600 (val)           Top-1 Acc 67     23
Anomaly Detection     Fishyscapes Web Oct 20           AP 43            17
Anomaly Detection     Fishyscapes Web Jan 20           AP 42.8          13
Anomaly Segmentation  BDDAnomaly                       AUPR 3.9         12
Showing 10 of 13 rows
