Nesterov Accelerated Gradient and Scale Invariance for Adversarial Attacks
About
Deep learning models are vulnerable to adversarial examples crafted by applying human-imperceptible perturbations to benign inputs. However, under the black-box setting, most existing attacks transfer poorly to other defense models. In this work, viewing adversarial example generation as an optimization process, we propose two new methods to improve the transferability of adversarial examples: the Nesterov Iterative Fast Gradient Sign Method (NI-FGSM) and the Scale-Invariant attack Method (SIM). NI-FGSM adapts Nesterov accelerated gradient to iterative attacks so as to effectively look ahead and improve the transferability of adversarial examples. SIM is based on our discovery of the scale-invariant property of deep learning models, which we leverage to optimize the adversarial perturbations over scaled copies of the input images, avoiding "overfitting" to the white-box model being attacked and generating more transferable adversarial examples. NI-FGSM and SIM can be naturally integrated to build a robust gradient-based attack that generates more transferable adversarial examples against defense models. Empirical results on the ImageNet dataset demonstrate that our attack methods exhibit higher transferability and achieve higher attack success rates than state-of-the-art gradient-based attacks.
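The combined attack described above can be sketched in a few lines of NumPy: each iteration takes a Nesterov look-ahead step, averages gradients over scaled copies of the input (SIM), accumulates momentum, and applies a sign step projected onto the ε-ball. This is a minimal illustrative sketch, not the paper's reference implementation; the toy linear classifier, `toy_grad`, and the hyperparameter values below are assumptions chosen for the example.

```python
import numpy as np

def ni_fgsm_sim(x, y, grad_fn, eps=0.1, steps=10, mu=1.0, m=3):
    """Sketch of NI-FGSM combined with SIM.

    grad_fn(x, y) must return the gradient of the classification loss
    with respect to the input x for the white-box model.
    """
    alpha = eps / steps          # per-step size
    g = np.zeros_like(x)         # accumulated momentum
    x_adv = x.astype(np.float64).copy()
    for _ in range(steps):
        # Nesterov look-ahead point
        x_nes = x_adv + alpha * mu * g
        # SIM: average gradients over scale copies x / 2^i
        grad = np.zeros_like(x)
        for i in range(m):
            grad += grad_fn(x_nes / (2.0 ** i), y)
        grad /= m
        # momentum update with L1-normalized gradient
        g = mu * g + grad / (np.sum(np.abs(grad)) + 1e-12)
        # sign step, then project back into the eps-ball and valid range
        x_adv = x_adv + alpha * np.sign(g)
        x_adv = np.clip(x_adv, x - eps, x + eps)
        x_adv = np.clip(x_adv, 0.0, 1.0)
    return x_adv

# Toy white-box "model": a fixed logistic classifier (hypothetical,
# standing in for a deep network purely to make the sketch runnable).
w = np.array([1.0, -2.0, 0.5])

def toy_grad(x, y):
    # Gradient of the cross-entropy loss w.r.t. the input x.
    p = 1.0 / (1.0 + np.exp(-np.dot(w, x)))
    return (p - y) * w

x = np.array([0.6, 0.3, 0.8])
x_adv = ni_fgsm_sim(x, 1.0, toy_grad, eps=0.1, steps=10)
```

In a real attack, `grad_fn` would backpropagate through the surrogate network; the averaging over `x / 2^i` is what exploits the scale-invariant property, since the model assigns similar losses to scaled copies of an image.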
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Adversarial Attack | ImageNet (val) | ASR (General) | 22.73 | 222 |
| Adversarial Attack | ImageNet | Attack Success Rate | 61.4 | 178 |
| Adversarial Attack | ImageNet (test) | -- | -- | 101 |
| Untargeted Adversarial Attack | CIFAR-10 (test) | ASR | 61.36 | 57 |
| Adversarial Attack | ImageNet-1K | Inc-v3ens3 | 26.2 | 48 |
| Adversarial Attack | ImageNet ILSVRC2012 (val) | Robust Accuracy (Inception v3) | 100 | 24 |
| Untargeted Adversarial Attack | ImageNet-Compatible | Inc-v3 Performance | 100 | 24 |
| Adversarial Attack | ImageNet 1k 1000 images | Robust Accuracy (Inc-v3) | 100 | 24 |
| Adversarial Attack | AADD-LQ (surrogate) | ASR | 1 | 24 |
| Adversarial Attack | ImageNet1k 1000 images subset | Model Accuracy (Inception v3) | 59.6 | 24 |