
Diversity can be Transferred: Output Diversification for White- and Black-box Attacks

About

Adversarial attacks often involve random perturbations of the inputs drawn from uniform or Gaussian distributions, e.g., to initialize optimization-based white-box attacks or generate update directions in black-box attacks. These simple perturbations, however, could be sub-optimal as they are agnostic to the model being attacked. To improve the efficiency of these attacks, we propose Output Diversified Sampling (ODS), a novel sampling strategy that attempts to maximize diversity in the target model's outputs among the generated samples. While ODS is a gradient-based strategy, the diversity offered by ODS is transferable and can be helpful for both white-box and black-box attacks via surrogate models. Empirically, we demonstrate that ODS significantly improves the performance of existing white-box and black-box attacks. In particular, ODS reduces the number of queries needed for state-of-the-art black-box attacks on ImageNet by a factor of two.
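To make the sampling strategy concrete, below is a minimal sketch of one ODS step: sample a random weight vector w_d uniformly from [-1, 1]^C over the C output classes, then take the normalized input-space gradient of w_d · f(x). The two-layer tanh network, its dimensions, and the function name `ods_direction` are illustrative assumptions, not the paper's setup; the gradient is written out analytically so the example stays self-contained.

```python
import numpy as np

rng = np.random.default_rng(0)

def ods_direction(x, W1, W2, rng):
    """One Output Diversified Sampling step (sketch).

    Samples w_d uniformly from [-1, 1]^C and returns the normalized
    input gradient of w_d . f(x), where f(x) = W2 @ tanh(W1 @ x) are
    the logits of a toy 2-layer network (hypothetical, for illustration).
    """
    h = np.tanh(W1 @ x)                    # hidden activations
    C = W2.shape[0]                        # number of output classes
    w_d = rng.uniform(-1.0, 1.0, size=C)   # random output-space direction
    # Backpropagate w_d through the logits: d(w_d . f)/dx
    grad = W1.T @ ((1.0 - h**2) * (W2.T @ w_d))
    return grad / (np.linalg.norm(grad) + 1e-12)

# Toy dimensions: 5-dim input, 8 hidden units, 3 classes
W1 = rng.normal(size=(8, 5))
W2 = rng.normal(size=(3, 8))
x = rng.normal(size=5)
d = ods_direction(x, W1, W2, rng)
print(round(float(np.linalg.norm(d)), 6))  # unit-norm perturbation direction
```

In a white-box attack this direction could initialize the perturbation; in a black-box attack the same computation would be run on a surrogate model and the resulting direction transferred to the target.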

Yusuke Tashiro, Yang Song, Stefano Ermon · 2020

Related benchmarks

Task                           Model         Fooling Rate  Rank
Untargeted Adversarial Attack  VGG-19        99.9          5
Untargeted Adversarial Attack  DenseNet-121  99.0          5
Untargeted Adversarial Attack  ResNext-50    98.4          5
Targeted Adversarial Attack    VGG-19        49.0          4
Targeted Adversarial Attack    DenseNet-121  49.7          4
Targeted Adversarial Attack    ResNext-50    42.7          4
