
Boosting Adversarial Transferability through Enhanced Momentum

About

Deep learning models are known to be vulnerable to adversarial examples crafted by adding human-imperceptible perturbations to benign images. Many existing adversarial attack methods achieve strong white-box attack performance but exhibit low transferability when attacking other models. Momentum iterative gradient-based methods have been shown to be effective in improving adversarial transferability. In this work, we propose an enhanced momentum iterative gradient-based method to further enhance adversarial transferability. Specifically, instead of only accumulating the gradient during the iterative process, we additionally accumulate the average gradient of data points sampled along the gradient direction of the previous iteration, so as to stabilize the update direction and escape from poor local maxima. Extensive experiments on the standard ImageNet dataset demonstrate that our method improves the adversarial transferability of momentum-based methods by a large margin of 11.1% on average. Moreover, by incorporating various input transformation methods, the adversarial transferability can be further improved significantly. We also attack several advanced defense models under the ensemble-model setting, and the improvements are remarkable, at least 7.8% on average.
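The update described above can be sketched as follows. This is a minimal NumPy illustration, not the authors' reference implementation: the function name `emi_fgsm`, the sampling of offsets via `np.linspace`, and the toy `grad_fn` interface are assumptions chosen for clarity. The key difference from plain momentum iterative attacks is that each step averages gradients at points sampled along the previous step's averaged gradient before feeding the result into the momentum accumulator.

```python
import numpy as np

def emi_fgsm(x, grad_fn, eps=16/255, steps=10, mu=1.0, num_samples=11, beta=2.0):
    """Sketch of an enhanced momentum iterative attack (assumed interface).

    x        : benign input in [0, 1]
    grad_fn  : callable returning the loss gradient w.r.t. its input
    eps      : L-infinity perturbation budget
    mu       : momentum decay factor
    beta     : radius factor for sampling along the previous gradient
    """
    alpha = eps / steps              # per-iteration step size
    x_adv = x.copy()
    g = np.zeros_like(x)             # momentum accumulator
    g_bar = np.zeros_like(x)         # averaged gradient from the previous step
    for _ in range(steps):
        # Sample points along the previous averaged-gradient direction
        # with offsets in [-beta * eps, beta * eps], and average their gradients.
        etas = np.linspace(-beta * eps, beta * eps, num_samples)
        grads = [grad_fn(x_adv + eta * g_bar) for eta in etas]
        g_bar = np.mean(grads, axis=0)
        # Standard momentum update with an L1-normalized averaged gradient.
        g = mu * g + g_bar / (np.sum(np.abs(g_bar)) + 1e-12)
        # Ascend the loss, then project back into the eps-ball and [0, 1].
        x_adv = np.clip(x_adv + alpha * np.sign(g), x - eps, x + eps)
        x_adv = np.clip(x_adv, 0.0, 1.0)
    return x_adv
```

On the first iteration `g_bar` is zero, so all sampled points coincide with the current input and the step reduces to the ordinary momentum update; the averaging only takes effect once a gradient direction has been established.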

Xiaosen Wang, Jiadong Lin, Han Hu, Jingdong Wang, Kun He • 2021

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Adversarial Attack | ImageNet (val) | ASR (General) | 31.73 | 222 |
| Untargeted Adversarial Attack | CIFAR-10 (test) | ASR | 74.36 | 57 |
| Untargeted Adversarial Attack | ImageNet-Compatible | Inc-v3 Performance | 100 | 24 |
| Untargeted Adversarial Attack | ImageNet-compatible (test) | Acc (Inc-v3) | 100 | 6 |
| Untargeted Adversarial Attack | ImageNet | ComDefend Robustness | 78.2 | 6 |
