
Adversarial Robustness on In- and Out-Distribution Improves Explainability

About

Neural networks have led to major improvements in image classification but suffer from non-robustness to adversarial changes, unreliable uncertainty estimates on out-distribution samples, and inscrutable black-box decisions. In this work we propose RATIO, a training procedure for Robustness via Adversarial Training on In- and Out-distribution, which leads to robust models with reliable and robust confidence estimates on the out-distribution. RATIO has similar generative properties to adversarial training, so that visual counterfactuals produce class-specific features. While adversarial training comes at the price of lower clean accuracy, RATIO achieves state-of-the-art $l_2$-adversarial robustness on CIFAR10 while maintaining better clean accuracy.
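The abstract's two-part objective (adversarial training on in-distribution samples plus confidence minimization on adversarially perturbed out-distribution samples) can be sketched as follows. This is an illustrative assumption of how such a loss might be combined, not the authors' implementation; the function names, the $l_2$ PGD attack parameters, and the weighting `lam` are all hypothetical.

```python
# Hypothetical sketch of a RATIO-style combined objective.
# All names and hyperparameters here are illustrative assumptions.
import torch
import torch.nn.functional as F

def l2_pgd(model, x, loss_fn, eps=0.5, steps=5, step_size=0.2):
    """Projected gradient ascent of loss_fn within an l2 ball of radius eps around x."""
    delta = torch.zeros_like(x, requires_grad=True)
    for _ in range(steps):
        loss = loss_fn(model(x + delta))
        grad, = torch.autograd.grad(loss, delta)
        # Take a normalized l2 gradient step.
        g_norm = grad.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, 1, 1, 1)
        delta = delta + step_size * grad / g_norm
        # Project the perturbation back onto the l2 ball.
        d_norm = delta.flatten(1).norm(dim=1).clamp_min(1e-12).view(-1, 1, 1, 1)
        delta = (delta * (eps / d_norm).clamp(max=1.0)).detach().requires_grad_(True)
    return (x + delta).detach()

def ratio_loss(model, x_in, y_in, x_out, lam=1.0):
    # In-distribution term: standard adversarial training
    # (inner attack maximizes the cross-entropy loss).
    x_in_adv = l2_pgd(model, x_in, lambda out: F.cross_entropy(out, y_in))
    loss_in = F.cross_entropy(model(x_in_adv), y_in)
    # Out-distribution term: the inner attack searches for the
    # highest-confidence point in the ball around the OOD sample ...
    x_out_adv = l2_pgd(
        model, x_out, lambda out: out.log_softmax(1).max(1).values.mean()
    )
    # ... and the outer loss pushes its predictive distribution toward
    # uniform (cross-entropy against the uniform label distribution).
    log_p = model(x_out_adv).log_softmax(1)
    loss_out = -log_p.mean()
    return loss_in + lam * loss_out
```

Under this sketch, the out-distribution term is what distinguishes RATIO-style training from plain adversarial training: it enforces low confidence not just on OOD samples themselves but on worst-case perturbations of them, which is what makes the resulting confidence estimates robust.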

Maximilian Augustin, Alexander Meinke, Matthias Hein • 2020

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Image Classification | CIFAR-10 (test) | Accuracy (Clean) | 92.23 | 273 |
| Generative Modeling | CIFAR-10 | FID | 21.96 | 27 |
| OOD Detection | CIFAR-10 (test) | Clean AUROC | 0.797 | 27 |
| Classification | CIFAR-10 | Accuracy | 92.23 | 15 |
| Image Classification | CIFAR-10 (l2 threat model, epsilon=0.5, test) | Standard Accuracy | 93.96 | 11 |
| Classification | CIFAR-100 | Accuracy | 71.58 | 7 |
| Generative Modeling | CIFAR-100 | Fréchet Inception Distance (FID) | 24.17 | 7 |
| Image Classification | CIFAR-10 (L-2 norm, epsilon=0.5, test) | Accuracy (Standard) | 93.96 | 6 |
