
Boosting the Transferability of Adversarial Attacks with Reverse Adversarial Perturbation

About

Deep neural networks (DNNs) have been shown to be vulnerable to adversarial examples, which produce erroneous predictions by injecting imperceptible perturbations. In this work, we study the transferability of adversarial examples, which is significant because of its threat to real-world applications where the model architecture or parameters are usually unknown. Many existing works reveal that adversarial examples are likely to overfit the surrogate model they are generated from, limiting their transfer attack performance against different target models. To mitigate this overfitting of the surrogate model, we propose a novel attack method, dubbed reverse adversarial perturbation (RAP). Specifically, instead of minimizing the loss at a single adversarial point, we advocate seeking an adversarial example located in a region of uniformly low loss, by injecting the worst-case perturbation (the reverse adversarial perturbation) at each step of the optimization procedure. The adversarial attack with RAP is formulated as a min-max bi-level optimization problem. By integrating RAP into the iterative attack process, our method finds more stable adversarial examples that are less sensitive to changes of the decision boundary, mitigating the overfitting of the surrogate model. Comprehensive experimental comparisons demonstrate that RAP significantly boosts adversarial transferability. Furthermore, RAP can be naturally combined with many existing black-box attack techniques to further boost transferability. When attacking a real-world image recognition system, the Google Cloud Vision API, we obtain a 22% improvement in targeted attack performance over the compared method. Our code is available at https://github.com/SCLBD/Transfer_attack_RAP.
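The min-max bi-level structure described in the abstract can be sketched in a few lines. The example below is a hypothetical NumPy toy, not the authors' implementation: the linear "surrogate model" `W`, the step sizes, and the perturbation budgets are all illustrative assumptions. An inner loop of gradient ascent finds the worst-case reverse perturbation `n`, then the outer step descends the loss at the shifted point `x + delta + n` (targeted-attack setting, so low loss toward the target class is the goal):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Toy surrogate "model": a fixed linear classifier over 2 features and
# 3 classes; the loss is cross-entropy toward a chosen target class.
W = np.array([[1.0, -0.5], [-0.3, 0.8], [0.2, 0.4]])

def loss_and_grad(x, target):
    p = softmax(W @ x)
    loss = -np.log(p[target] + 1e-12)
    grad = W.T @ (p - np.eye(3)[target])  # d(loss)/dx
    return loss, grad

def rap_attack(x0, target, eps=0.5, eps_n=0.2,
               outer_steps=50, inner_steps=5,
               alpha=0.05, alpha_n=0.05):
    """Min-max sketch: the inner loop ascends the loss to find the
    reverse perturbation n; the outer loop descends the loss at
    x + delta + n, so delta ends up in a region of uniformly low loss."""
    delta = np.zeros_like(x0)
    for _ in range(outer_steps):
        # Inner maximization: worst-case (reverse) perturbation n
        n = np.zeros_like(x0)
        for _ in range(inner_steps):
            _, g = loss_and_grad(x0 + delta + n, target)
            n = np.clip(n + alpha_n * np.sign(g), -eps_n, eps_n)
        # Outer minimization at the shifted point
        _, g = loss_and_grad(x0 + delta + n, target)
        delta = np.clip(delta - alpha * np.sign(g), -eps, eps)
    return x0 + delta
```

In the paper the same bi-level scheme is run with a DNN surrogate and PGD-style sign updates under an L∞ budget; the sketch above keeps only the min-max structure that distinguishes RAP from a plain iterative attack.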

Zeyu Qin, Yanbo Fan, Yi Liu, Li Shen, Yong Zhang, Jue Wang, Baoyuan Wu • 2022

Related benchmarks

Task | Dataset | Metric | Result | Rank
Adversarial Attack | ImageNet (val) | ASR (General) | 44.45 | 222
Untargeted Adversarial Attack | ImageNet-1k (val) | ASR | 100 | 57
Untargeted Adversarial Attack | CIFAR-10 (test) | ASR | 78.43 | 57
Targeted Adversarial Attack | ImageNet | Dense-121 Score | 88.5 | 31
Untargeted Adversarial Attack | ImageNet (test) | ASR (Inc-v3) | 24.9 | 26
Untargeted Adversarial Attack | ImageNet-Compatible | Inc-v3 Performance | 99.9 | 24
Targeted Adversarial Attack | ImageNet 10-Targets (all-source) | Targeted Attack Success Rate | 95.7 | 15
Black-box Adversarial Attack | ImageNet (test) | Success Rate (Res34) | 100 | 13
Targeted Adversarial Attack | ImageNet 1k (test) | Model Performance (IncRes-v2) | 90.4 | 6
Untargeted Adversarial Attack | ImageNet 1k (test) | ASR (IncRes-v2) | 100 | 6
Showing 10 of 20 rows

Other info

Code: https://github.com/SCLBD/Transfer_attack_RAP