# Reinforcing Adversarial Robustness using Model Confidence Induced by Adversarial Training

## About
In this paper we study leveraging confidence information induced by adversarial training to reinforce the adversarial robustness of a given adversarially trained model. A natural measure of confidence is $\|F({\bf x})\|_\infty$ (i.e., how confident $F$ is in its prediction). We start by analyzing an adversarial training formulation proposed by Madry et al. We demonstrate that, under a variety of instantiations, even a moderately good solution to their objective induces confidence that acts as a discriminator, distinguishing between right and wrong model predictions in a neighborhood of a point sampled from the underlying distribution. Based on this, we propose Highly Confident Near Neighbor (${\tt HCNN}$), a framework that combines confidence information with nearest neighbor search to reinforce the adversarial robustness of a base model. We give algorithms in this framework and perform a detailed empirical study. We report encouraging experimental results that support our analysis, and also discuss problems we observed with existing adversarial training.
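As a rough illustration of the idea (not the paper's actual algorithms), the sketch below computes the confidence measure $\|F({\bf x})\|_\infty$ as the largest softmax probability and finds a highly confident near neighbor by projected gradient ascent on confidence within an $\ell_\infty$ ball. The gradient-ascent search, the ball radius `eps`, and the assumption that the model outputs logits over inputs in $[0,1]$ are all illustrative choices; the paper defines HCNN and its algorithms precisely.

```python
import torch

def confidence(model, x):
    """Confidence measure ||F(x)||_inf: the largest softmax probability."""
    with torch.no_grad():
        return torch.softmax(model(x), dim=-1).max(dim=-1).values

def hcnn_predict(model, x, eps=8/255, steps=20, step_size=2/255):
    """Predict with the label of a highly confident near neighbor of x,
    found via projected gradient ascent on model confidence within an
    l_inf ball of radius eps (a hypothetical instantiation of HCNN)."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        conf = torch.softmax(model(x_adv), dim=-1).max(dim=-1).values.sum()
        grad, = torch.autograd.grad(conf, x_adv)
        with torch.no_grad():
            # Ascend on confidence, then project back into the eps-ball
            # around x and the valid pixel range.
            x_adv = x_adv + step_size * grad.sign()
            x_adv = x + (x_adv - x).clamp(-eps, eps)
            x_adv = x_adv.clamp(0.0, 1.0)
    with torch.no_grad():
        return model(x_adv).argmax(dim=-1)
```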
## Related benchmarks
| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Adversarial Detection | CIFAR-10 clean (test) | TPR-95 | 92.44 | 23 |
| Adversarial Robustness (Rejection) | CIFAR-10 PGD-100, l_inf, 16/255 (test) | TPR-95 | 45.12 | 15 |
| Adversarial Robustness (Rejection) | CIFAR-10 PGD-100, l_2, 128/255 (test) | TPR-95 | 69.1 | 15 |
| Adversarial Robustness (Rejection) | CIFAR-10 PGD-100, l_inf, 8/255 (test) | TPR-95 | 57.62 | 15 |
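The leaderboard does not define TPR-95 here, so the sketch below assumes one common rejection-style convention: the acceptance threshold is set so that 95% of clean test inputs are accepted, and the reported value is the fraction of adversarial inputs that are either rejected or still classified correctly at that threshold. This reading, along with the function name `tpr95_robust_acc` and its inputs, is an assumption for illustration only.

```python
import numpy as np

def tpr95_robust_acc(clean_conf, adv_conf, adv_correct):
    """Hypothetical TPR-95 computation: robust accuracy with rejection
    at the threshold that accepts 95% of clean test inputs.

    clean_conf:  confidences on clean test inputs
    adv_conf:    confidences on adversarial inputs
    adv_correct: boolean array, whether each adversarial input is
                 classified correctly
    """
    # Threshold at the 5th percentile of clean confidences, so 95%
    # of clean inputs fall above it and are accepted.
    tau = np.quantile(clean_conf, 0.05)
    rejected = adv_conf < tau
    # An adversarial input counts if it is detected (rejected) or
    # still classified correctly.
    return np.mean(rejected | adv_correct)
```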