Detecting Out-of-Distribution Inputs to Deep Generative Models Using Typicality

About

Recent work has shown that deep generative models can assign higher likelihood to out-of-distribution data sets than to their training data (Nalisnick et al., 2019; Choi et al., 2019). We posit that this phenomenon is caused by a mismatch between the model's typical set and its areas of high probability density. In-distribution inputs should reside in the former but not necessarily in the latter, as previous work has presumed. To determine whether or not inputs reside in the typical set, we propose a statistically principled, easy-to-implement test using the empirical distribution of model likelihoods. The test is model agnostic and widely applicable, only requiring that the likelihood can be computed or closely approximated. We report experiments showing that our procedure can successfully detect the out-of-distribution sets in several of the challenging cases reported by Nalisnick et al. (2019).

Eric Nalisnick, Akihiro Matsukawa, Yee Whye Teh, Balaji Lakshminarayanan • 2019
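
The test compares a batch's average negative log-likelihood against an entropy estimate computed from training data: in-distribution batches should land close to the entropy (inside the typical set), while out-of-distribution batches drift away. Below is a minimal NumPy sketch of that batch typicality test with a bootstrap-calibrated threshold, following the procedure the paper describes; the function names, and the assumption that per-example log-likelihoods are already precomputed, are ours, not from the authors' code.

```python
import numpy as np

def entropy_estimate(train_log_liks):
    """Monte Carlo estimate of the model's entropy, H[p] ~= -mean(log p(x)),
    using log-likelihoods of training (or held-out) data under the model."""
    return -np.mean(train_log_liks)

def bootstrap_threshold(val_log_liks, h_hat, m, alpha=0.99, n_boot=10_000, seed=0):
    """Calibrate the rejection threshold: resample size-m batches from
    in-distribution validation data and take the alpha-quantile of the
    typicality statistic |mean NLL - H_hat|."""
    rng = np.random.default_rng(seed)
    stats = np.array([
        abs(-rng.choice(val_log_liks, size=m, replace=True).mean() - h_hat)
        for _ in range(n_boot)
    ])
    return np.quantile(stats, alpha)

def batch_is_ood(batch_log_liks, h_hat, threshold):
    """Flag a batch as OOD when its average NLL sits atypically far
    from the entropy estimate, i.e., outside the model's typical set."""
    return abs(-np.mean(batch_log_liks) - h_hat) > threshold
```

Note that the test is defined over batches; the paper reports that detection improves as the batch size M grows, with M = 1 reducing to thresholding a single likelihood.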

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Out-of-Distribution Detection | CIFAR-10 vs SVHN (test) | AUROC 87 | 101 |
| Out-of-Distribution Detection | CIFAR-10 vs CIFAR-100 (test) | AUROC 54.8 | 93 |
| OOD Detection | CIFAR-10 (ID) vs SVHN (OOD) | AUROC 0.653 | 91 |
| Out-of-Distribution Detection | CIFAR-100 (ID) vs SVHN (OOD) (test) | AUROC 87.83 | 90 |
| Out-of-Distribution Detection | CIFAR-10 (ID) vs SVHN (OOD) (test) | AUROC 42 | 79 |
| OOD Detection | CIFAR-100 (ID) vs SVHN (OOD) | AUROC (%) 67.64 | 74 |
| Out-of-Distribution Detection | FashionMNIST (ID) vs MNIST (OOD) | AUROC 0.7365 | 61 |
| Out-of-Distribution Detection | SVHN (ID) vs CIFAR-10 (OOD) (test) | AUROC 99.69 | 56 |
| Out-of-Distribution Detection | CIFAR-10 (ID) vs Celeb-A (OOD) | AUROC 92.53 | 55 |
| Out-of-Distribution Detection | CIFAR-10 (ID) vs SVHN (OOD), standard (test) | AUROC 88.66 | 31 |
Showing 10 of 19 rows
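
On the reporting side, AUROC treats OOD detection as binary classification of a per-input (or per-batch) score; 0.5 is chance and 1.0 is perfect, which is why the rows above mix a 0–1 scale with percentages. A small illustrative computation with scikit-learn, using synthetic placeholder scores rather than any result from the table:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
# Placeholder typicality statistics (higher = more atypical / OOD-like);
# these are synthetic, not scores from any model in the table above.
id_scores = rng.normal(loc=0.5, scale=0.3, size=1000)   # e.g., CIFAR-10 test
ood_scores = rng.normal(loc=1.2, scale=0.4, size=1000)  # e.g., SVHN test

y_true = np.concatenate([np.zeros(len(id_scores)), np.ones(len(ood_scores))])  # 1 = OOD
y_score = np.concatenate([id_scores, ood_scores])
print(f"AUROC = {roc_auc_score(y_true, y_score):.4f}")  # ~0.92 for these placeholders
```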
