Learning Confidence for Out-of-Distribution Detection in Neural Networks
About
Modern neural networks are powerful predictive models, but they are often incapable of recognizing when their predictions may be wrong. Closely related is the task of out-of-distribution detection, in which a network must determine whether an input lies outside the distribution on which it can be expected to perform safely. To jointly address these issues, we propose a method for learning confidence estimates in neural networks that is simple to implement and produces intuitively interpretable outputs. We demonstrate that on the task of out-of-distribution detection, our technique surpasses recently proposed methods that construct confidence from the network's output distribution, without requiring any additional labels or access to out-of-distribution examples. Additionally, we address the problem of calibrating out-of-distribution detectors, demonstrating that misclassified in-distribution examples can serve as a proxy for out-of-distribution examples.
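The confidence-learning objective described above can be sketched in a few lines. The following is an illustrative NumPy reconstruction, not the authors' reference code: the interpolation of predictions toward the target in proportion to a learned confidence, plus a log-penalty on that confidence, follows the paper's description, while the function name and the `lam` weighting are assumptions made here.

```python
import numpy as np

def confidence_loss(probs, conf, target_onehot, lam=0.1):
    """Sketch of a confidence-weighted classification loss.

    probs:         (N, K) softmax class probabilities
    conf:          (N,)   confidence estimates in (0, 1]
    target_onehot: (N, K) one-hot ground-truth labels
    lam:           weight on the confidence penalty (hyperparameter)
    """
    c = conf[:, None]
    # Give the network "hints": interpolate its prediction toward the
    # true target in proportion to how unconfident it is.
    p_interp = c * probs + (1.0 - c) * target_onehot
    # Standard negative log-likelihood on the interpolated prediction.
    task_loss = -np.log((p_interp * target_onehot).sum(axis=1))
    # Penalize low confidence so the network cannot always ask for hints.
    conf_penalty = -np.log(conf)
    return float((task_loss + lam * conf_penalty).mean())
```

With `conf = 1` the loss reduces to ordinary cross-entropy; lowering `conf` trades task loss against the penalty term, which is the mechanism that elicits meaningful confidence scores at test time.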
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Action Recognition | Something-Something v2 (test) | Top-1 Acc | 30.3 | 333 |
| Near-OOD Detection | CIFAR-100 Near-OOD (test) | AUROC | 71.6 | 93 |
| OOD Detection | CIFAR-10 | FPR@95 | 21.48 | 85 |
| Near-OOD Detection | CIFAR-10 | AUROC | 89.84 | 71 |
| OOD Detection | CIFAR-100 Dfar | AUROC | 68.9 | 69 |
| Anomaly Segmentation | Fishyscapes Lost & Found (test) | FPR@95 | 22.11 | 61 |
| Near-OOD Detection | ImageNet-200 | AUROC | 79.1 | 36 |
| Far-OOD Detection | Average (CIFAR-10, CIFAR-100, TinyImageNet) | AUROC | 84.06 | 35 |
| Near-OOD Detection | Average (CIFAR-10, CIFAR-100, TinyImageNet) | AUROC | 80.18 | 35 |
| Far-OOD Detection | TinyImageNet | AUROC | 90.43 | 34 |
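The AUROC and FPR@95 numbers above compare confidence scores on in-distribution versus out-of-distribution inputs. As a minimal sketch of how these two standard metrics are computed from raw scores (the function names here are illustrative, not from any particular benchmark library):

```python
import numpy as np

def fpr_at_95_tpr(in_scores, out_scores):
    """FPR@95: fraction of OOD inputs accepted when the threshold
    is set to keep 95% of in-distribution inputs."""
    thresh = np.percentile(in_scores, 5)          # 95% of in-dist scores lie above
    return float(np.mean(out_scores >= thresh))   # OOD false-positive rate

def auroc(in_scores, out_scores):
    """AUROC via the rank (Mann-Whitney) formulation: the probability that
    a random in-distribution input scores higher than a random OOD input."""
    diff = in_scores[:, None] - out_scores[None, :]
    return float(np.mean((diff > 0) + 0.5 * (diff == 0)))
```

Higher AUROC is better (0.5 is chance), while lower FPR@95 is better, which is why the table reports both directions.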