# Learning Confidence for Out-of-Distribution Detection in Neural Networks

## About
Modern neural networks are powerful predictive models, but they are often incapable of recognizing when their predictions may be wrong. Closely related to this is the task of out-of-distribution detection, where a network must determine whether an input lies outside the distribution on which it can be expected to perform safely. To jointly address these issues, we propose a method of learning confidence estimates for neural networks that is simple to implement and produces intuitively interpretable outputs. We demonstrate that on the task of out-of-distribution detection, our technique surpasses recently proposed techniques that construct confidence from the network's output distribution, without requiring any additional labels or access to out-of-distribution examples. Additionally, we address the problem of calibrating out-of-distribution detectors, where we demonstrate that misclassified in-distribution examples can be used as a proxy for out-of-distribution examples.
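The section above does not spell out the training objective, so the following is a minimal NumPy sketch of one plausible formulation of learned confidence: the network emits class probabilities `p` and a confidence score `c` in (0, 1); during training, predictions are interpolated toward the one-hot target in proportion to `1 - c`, and a `-log c` penalty discourages the network from always requesting this "hint". All names (`confidence_loss`, `lambda_`) and the exact penalty weighting are illustrative assumptions, not the authors' code.

```python
import numpy as np

def confidence_loss(p, c, y, lambda_=0.1):
    """Cross-entropy on hint-adjusted predictions plus a confidence penalty.

    p: (batch, classes) softmax probabilities
    c: (batch, 1) confidence scores in (0, 1)
    y: (batch, classes) one-hot targets
    lambda_: weight of the confidence penalty (illustrative value)
    """
    # Interpolate predictions toward the true label: low confidence
    # means the network leans more heavily on the hint.
    p_adj = c * p + (1.0 - c) * y
    nll = -np.sum(y * np.log(p_adj + 1e-12), axis=1)      # per-example cross-entropy
    penalty = -np.log(c + 1e-12).squeeze(axis=1)          # pushes c toward 1
    return np.mean(nll + lambda_ * penalty)

# Toy batch: two examples, three classes.
p = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.8, 0.1]])
c = np.array([[0.9], [0.6]])
y = np.array([[1.0, 0.0, 0.0],
              [0.0, 0.0, 1.0]])
loss = confidence_loss(p, c, y)

# At test time, thresholding c yields an out-of-distribution detector;
# the 0.5 threshold here is arbitrary, chosen only for illustration.
is_ood = c.squeeze(axis=1) < 0.5
```

The second example is both misclassified and low-confidence, which illustrates the calibration idea from the paragraph above: misclassified in-distribution inputs behave like a proxy for out-of-distribution inputs when selecting a detection threshold.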
## Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Action Recognition | Something-Something v2 (test) | Top-1 Acc | 30.3 | 333 |
| Anomaly Segmentation | Fishyscapes Lost & Found (test) | FPR95 | 22.11 | 61 |
| Anomaly Segmentation | Fishyscapes Static (test) | FPR95 | 19.4 | 28 |
| Anomaly Detection | Fishyscapes Static | AP | 45 | 27 |
| Anomaly Detection | Fishyscapes Lost & Found | AP | 10.3 | 27 |
| Action Recognition | UCF-101 1.0 (test) | Top-1 Acc | 88.8 | 23 |
| Action Recognition | Kinetics 400/600 (val) | Top-1 Accuracy | 67 | 23 |
| Anomaly Detection | Fishyscapes Web Oct 20 | AP | 43 | 17 |
| Anomaly Detection | Fishyscapes Web Jan 20 | AP | 42.8 | 13 |
| Anomaly Segmentation | BDDAnomaly | AUPR | 3.9 | 12 |