Out-of-Distribution Detection Using an Ensemble of Self Supervised Leave-out Classifiers
About
As deep learning methods form a critical part of commercially important applications such as autonomous driving and medical diagnostics, it is important to reliably detect out-of-distribution (OOD) inputs while deploying these algorithms. In this work, we propose an OOD detection algorithm built on an ensemble of classifiers. We train each classifier in a self-supervised manner by leaving out a random subset of the training data as OOD data and treating the rest as in-distribution (ID) data. We propose a novel margin-based loss over the softmax output which seeks to maintain at least a margin $m$ between the average entropy of the OOD and ID samples. We minimize this loss jointly with the standard cross-entropy loss to train the ensemble of classifiers. We also propose a novel method to combine the outputs of the ensemble to obtain an OOD detection score and a class prediction. Overall, our method convincingly outperforms Hendrycks et al. [7] and the current state-of-the-art ODIN [13] on several OOD detection benchmarks.
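The margin-based entropy loss can be sketched as follows. This is a minimal NumPy illustration under the natural reading of the abstract (a hinge penalty that is zero once the average OOD entropy exceeds the average ID entropy by at least $m$); the function names and the margin value are illustrative, not taken from the paper.

```python
import numpy as np

def entropy(probs):
    """Shannon entropy of each softmax distribution (one row per sample)."""
    eps = 1e-12  # avoid log(0)
    return -np.sum(probs * np.log(probs + eps), axis=1)

def margin_entropy_loss(probs_id, probs_ood, m=0.4):
    """Hinge-style margin loss between average OOD and ID entropies.

    Zero when mean_entropy(OOD) - mean_entropy(ID) >= m; positive
    otherwise, pushing OOD predictions toward higher uncertainty.
    The margin value m is a hypothetical choice for illustration.
    """
    gap = entropy(probs_ood).mean() - entropy(probs_id).mean()
    return max(0.0, m - gap)
```

For example, confident ID predictions (low entropy) paired with near-uniform OOD predictions (entropy close to $\log K$ for $K$ classes) incur zero loss, while the reversed situation is penalized. In training this term would be added to the standard cross-entropy loss on the ID subset.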
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Out-of-Distribution Detection | CIFAR-10 (in-distribution) / TinyImageNet (out-of-distribution) (test) | AUROC 99.36 | 71 |
| Out-of-Distribution Detection | CIFAR-100 (in-distribution) / LSUN (out-of-distribution) (test) | AUROC 96.77 | 67 |
| Out-of-Distribution Detection | LSUN (out-of-distribution) vs CIFAR-10 (in-distribution) | AUROC 99.7 | 28 |
| Out-of-Distribution Detection | CIFAR-10 Gaussian | AUROC 99.58 | 11 |
| Out-of-Distribution Detection | Tiny ImageNet (out-of-distribution) vs CIFAR-100 (in-distribution) | -- | 10 |
| Out-of-Distribution Detection | CIFAR-100 Gaussian | AUROC 93.04 | 8 |