
Explainable Deep One-Class Classification

About

Deep one-class classification variants for anomaly detection learn a mapping that concentrates nominal samples in feature space, causing anomalies to be mapped away. Because this transformation is highly non-linear, finding interpretations poses a significant challenge. In this paper we present an explainable deep one-class classification method, Fully Convolutional Data Description (FCDD), where the mapped samples are themselves also an explanation heatmap. FCDD yields competitive detection performance and provides reasonable explanations on common anomaly detection benchmarks with CIFAR-10 and ImageNet. On MVTec-AD, a recent manufacturing dataset offering ground-truth anomaly maps, FCDD sets a new state of the art in the unsupervised setting. Our method can incorporate ground-truth anomaly maps during training, and even a few of them (~5) improve performance significantly. Finally, using FCDD's explanations we demonstrate the vulnerability of deep one-class classification models to spurious image features, such as image watermarks.
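To make the "mapped samples are themselves an explanation heatmap" idea concrete, here is a minimal NumPy sketch of an FCDD-style objective: a fully convolutional network produces a low-resolution map Φ(X), the heatmap is its elementwise pseudo-Huber transform A(X), and the per-sample anomaly score is the mean of A(X); nominal samples minimize the score while labeled anomalies maximize it via a -log(1 - exp(-score)) term. The function names and the small epsilon are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np

def pseudo_huber(phi):
    # Elementwise pseudo-Huber of the fully convolutional output map:
    # A(X) = sqrt(Phi(X)^2 + 1) - 1, which doubles as the explanation heatmap.
    return np.sqrt(phi ** 2 + 1.0) - 1.0

def fcdd_loss(phi_maps, labels, eps=1e-9):
    # phi_maps: (n, u, v) low-resolution network outputs Phi(X_i)
    # labels:   (n,) with 0 = nominal, 1 = anomalous
    A = pseudo_huber(phi_maps)          # heatmaps A(X_i)
    scores = A.mean(axis=(1, 2))        # per-sample anomaly score
    nominal_term = (1 - labels) * scores
    # Labeled anomalies are pushed away from the center of the description:
    anomalous_term = labels * -np.log(1.0 - np.exp(-scores) + eps)
    return (nominal_term + anomalous_term).mean()
```

For pixel-level explanations, the paper upsamples A(X) back to input resolution; the sketch above only covers the training objective, where unlabeled (nominal) samples concentrate near zero score and anomalies are driven toward large scores.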

Philipp Liznerski, Lukas Ruff, Robert A. Vandermeulen, Billy Joe Franks, Marius Kloft, Klaus-Robert Müller · 2020

Related benchmarks

Task                  Dataset               Metric         Result  Rank
Anomaly Localization  MVTec AD              Pixel AUROC    92      369
Anomaly Detection     MVTec-AD (test)       I-AUROC        95.7    226
Anomaly Localization  MVTec-AD (test)       Pixel AUROC    92.1    181
Anomaly Localization  MVTec                 AUC            98      70
Anomaly Detection     MVTec AD 1.0 (test)   Image AUROC    96.5    57
Anomaly Localization  MVTec AD 1.0 (test)   AUROC (Pixel)  96.6    47
Anomaly Detection     FashionMNIST (test)   ROCAUC         0.89    35
Anomaly Segmentation  MVTec AD              --             --      33
Anomaly Detection     CIFAR-10 one-for-all  AUROC          78.9    14
Anomaly Localization  MVTec-AD 2019         Bottle         97      10

(Showing 10 of 15 rows)

Other info

Code
