# Adversarial Robustness on In- and Out-Distribution Improves Explainability

## About
Neural networks have led to major improvements in image classification, but they suffer from non-robustness to adversarial changes, unreliable uncertainty estimates on out-distribution samples, and inscrutable black-box decisions. In this work we propose RATIO, a training procedure for Robustness via Adversarial Training on In- and Out-distribution, which leads to robust models with reliable and robust confidence estimates on the out-distribution. RATIO has generative properties similar to adversarial training, so that visual counterfactuals produce class-specific features. While adversarial training comes at the price of lower clean accuracy, RATIO achieves state-of-the-art $l_2$-adversarial robustness on CIFAR10 while maintaining better clean accuracy.
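The in-distribution half of RATIO relies on $l_2$-bounded adversarial training, whose inner maximization is typically carried out with projected gradient descent (PGD). Below is a minimal NumPy sketch of an $l_2$-PGD attack against a toy logistic-regression model; the model, weights, step size, and gradient function are all illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def project_l2(delta, eps):
    """Project a perturbation onto the l2 ball of radius eps."""
    norm = np.linalg.norm(delta)
    if norm > eps:
        delta = delta * (eps / norm)
    return delta

def pgd_l2(x, grad_fn, eps=0.5, step=0.1, iters=20):
    """l2-bounded PGD: take normalized gradient-ascent steps on the loss,
    re-projecting the accumulated perturbation after every step."""
    delta = np.zeros_like(x)
    for _ in range(iters):
        g = grad_fn(x + delta)
        g_norm = np.linalg.norm(g)
        if g_norm > 0:
            delta = delta + step * g / g_norm
        delta = project_l2(delta, eps)
    return x + delta

# Toy logistic-regression "model"; weights chosen arbitrarily for illustration.
w = np.array([1.0, -2.0, 0.5])

def grad_ce_wrt_x(x, y=1):
    """Gradient w.r.t. the input of the loss -log(sigmoid(y * w.x)), y in {-1, +1}."""
    s = 1.0 / (1.0 + np.exp(-y * (w @ x)))
    return -(1.0 - s) * y * w

x = np.array([0.2, 0.1, -0.3])
x_adv = pgd_l2(x, grad_ce_wrt_x, eps=0.5, step=0.1, iters=20)
print(np.linalg.norm(x_adv - x))  # perturbation stays within the eps=0.5 l2 ball
```

In a RATIO-style objective, the same inner maximization would also be run on out-distribution inputs, there to find points where the model's maximum confidence is highest, so that training can push it back down.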
## Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Image Classification | CIFAR-10 (test) | Accuracy (Clean) | 92.23 | 273 |
| Generative Modeling | CIFAR-10 | FID | 21.96 | 27 |
| OOD Detection | CIFAR-10 (test) | Clean AUROC | 0.797 | 27 |
| Classification | CIFAR-10 | Accuracy | 92.23 | 15 |
| Image Classification | CIFAR-10 ($l_2$ threat model, $\epsilon=0.5$, test) | Standard Accuracy | 93.96 | 11 |
| Classification | CIFAR-100 | Accuracy | 71.58 | 7 |
| Generative Modeling | CIFAR-100 | Fréchet Inception Distance (FID) | 24.17 | 7 |
| Image Classification | CIFAR-10 ($l_2$ norm, $\epsilon=0.5$, test) | Accuracy (Standard) | 93.96 | 6 |