
Deep Anomaly Detection with Outlier Exposure

About

It is important to detect anomalous inputs when deploying machine learning systems. The use of larger and more complex inputs in deep learning magnifies the difficulty of distinguishing between anomalous and in-distribution examples. At the same time, diverse image and text data are available in enormous quantities. We propose leveraging these data to improve deep anomaly detection by training anomaly detectors against an auxiliary dataset of outliers, an approach we call Outlier Exposure (OE). This enables anomaly detectors to generalize and detect unseen anomalies. In extensive experiments on natural language processing and small- and large-scale vision tasks, we find that Outlier Exposure significantly improves detection performance. We also observe that cutting-edge generative models trained on CIFAR-10 may assign higher likelihoods to SVHN images than to CIFAR-10 images; we use OE to mitigate this issue. We also analyze the flexibility and robustness of Outlier Exposure, and identify characteristics of the auxiliary dataset that improve performance.
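The Outlier Exposure objective described above can be sketched in a few lines: the usual cross-entropy loss on labeled in-distribution data, plus a term that pushes the model's softmax toward the uniform distribution on examples drawn from the auxiliary outlier dataset. This is a minimal NumPy version of the formulation in the paper; the function name and the default weighting `lam=0.5` are illustrative choices, not the paper's exact code.

```python
import numpy as np

def log_softmax(logits):
    # Numerically stable log-softmax over the class axis.
    z = logits - logits.max(axis=1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=1, keepdims=True))

def oe_loss(logits_in, labels_in, logits_out, lam=0.5):
    # Standard cross-entropy on labeled in-distribution examples.
    lsm_in = log_softmax(logits_in)
    ce = -lsm_in[np.arange(len(labels_in)), labels_in].mean()
    # Outlier Exposure term: cross-entropy between the uniform
    # distribution over classes and the model's softmax on outliers,
    # i.e. the negative mean of the outlier log-probabilities.
    oe = -log_softmax(logits_out).mean()
    return ce + lam * oe
```

When the model is already uniform on outliers, the OE term equals log(K) for K classes, its minimum, so the penalty only bites when the model is confidently wrong on out-of-distribution inputs.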

Dan Hendrycks, Mantas Mazeika, Thomas Dietterich• 2018

Related benchmarks

Task | Dataset | Result | Rank
---- | ------- | ------ | ----
Image Classification | ImageNet-1K 1.0 (val) | Top-1 Accuracy 75.71 | 1952
Image Classification | ImageNet-1K | Top-1 Accuracy 75.51 | 1239
Image Classification | CIFAR-100 | Accuracy 76.87 | 691
Image Classification | CIFAR-10 | Accuracy 94.83 | 507
Out-of-Distribution Detection | SUN OOD with ImageNet-1k In-distribution (test) | FPR@95 52.6 | 204
Out-of-Distribution Detection | Textures | AUROC 0.9773 | 168
Out-of-Distribution Detection | Places | FPR@95 19.07 | 142
Out-of-Distribution Detection | ImageNet OOD Average 1k (test) | FPR@95 22.97 | 137
Out-of-Distribution Detection | ImageNet-1k ID iNaturalist OOD | FPR@95 7.92 | 132
Anomaly Detection | CIFAR-10 | -- | 130
Showing 10 of 257 rows
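The table reports the two standard out-of-distribution detection metrics, FPR@95 and AUROC. A sketch of how they are commonly computed, assuming higher scores mean "more in-distribution" (e.g. maximum softmax probability); exact conventions, such as which class counts as positive, vary across papers:

```python
import numpy as np

def fpr_at_95_tpr(score_in, score_out):
    # Choose the threshold so that 95% of in-distribution examples
    # score above it (true positive rate = 95% on in-distribution).
    thresh = np.percentile(score_in, 5)
    # FPR@95: fraction of OOD examples that also clear the threshold,
    # i.e. are wrongly accepted as in-distribution.
    return float((score_out >= thresh).mean())

def auroc(score_in, score_out):
    # AUROC is the probability that a random in-distribution example
    # scores higher than a random OOD example (ties count as 0.5).
    diff = score_in[:, None] - score_out[None, :]
    return float(((diff > 0) + 0.5 * (diff == 0)).mean())
```

Lower FPR@95 and higher AUROC are better, which is why the OOD rows above report both kinds of numbers.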
