
Strong Transferable Adversarial Attacks via Ensembled Asymptotically Normal Distribution Learning

About

Strong adversarial examples are crucial for evaluating and enhancing the robustness of deep neural networks. However, the performance of popular attacks is usually sensitive, for instance, to minor image transformations, stemming from limited information -- typically only one input example, a handful of white-box source models, and unknown defense strategies. Hence, the crafted adversarial examples are prone to overfit the source model, which hampers their transferability to unknown architectures. In this paper, we propose an approach named Multiple Asymptotically Normal Distribution Attacks (MultiANDA), which explicitly characterizes adversarial perturbations from a learned distribution. Specifically, we approximate the posterior distribution over the perturbations by exploiting the asymptotic normality of stochastic gradient ascent (SGA), and employ the deep-ensemble strategy as an effective proxy for Bayesian marginalization, estimating a mixture of Gaussians that enables a more thorough exploration of the potential optimization space. The approximated posterior essentially describes the stationary distribution of the SGA iterations, which captures the geometric information around the local optimum. MultiANDA therefore allows an unlimited number of adversarial perturbations to be drawn for each input while reliably maintaining transferability. In extensive experiments on seven normally trained and seven defense models, our proposed method outperforms ten state-of-the-art black-box attacks on deep learning models with or without defenses.
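The core idea above -- treating the SGA iterates over a perturbation as samples whose stationary distribution is approximately Gaussian, then drawing fresh perturbations from that fitted distribution -- can be illustrated with a minimal, self-contained sketch. This is not the authors' implementation: the toy `loss_grad` stands in for the gradient of a source model's classification loss at `x + delta`, and the diagonal-Gaussian fit is a simplification of the paper's learned distribution.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 8        # toy perturbation dimensionality (a real attack uses image-shaped deltas)
eps = 0.3      # L_inf perturbation budget
lr = 0.05      # SGA step size
n_steps = 200

def loss_grad(delta):
    """Toy stand-in for the attack-loss gradient w.r.t. the perturbation.
    In a real attack this would be the gradient of the source model's
    classification loss evaluated at x + delta on a mini-batch."""
    return -delta + rng.normal(scale=0.1, size=delta.shape)  # noisy stochastic gradient

# Run sign-based stochastic gradient ascent and record the iterates.
delta = np.zeros(dim)
iterates = []
for _ in range(n_steps):
    delta = np.clip(delta + lr * np.sign(loss_grad(delta)), -eps, eps)
    iterates.append(delta.copy())

# Asymptotic-normality approximation: discard burn-in, then fit the
# mean and (diagonal) standard deviation of the stationary iterates.
stationary = np.array(iterates[n_steps // 2:])
mu = stationary.mean(axis=0)
sigma = stationary.std(axis=0)

# Draw an effectively unlimited number of perturbations from the
# learned Gaussian, projected back into the L_inf ball.
samples = rng.normal(mu, sigma, size=(1000, dim)).clip(-eps, eps)
```

In the paper's full method, several such SGA runs (a deep ensemble) would each contribute one Gaussian component, yielding the mixture of Gaussians used for Bayesian marginalization; this sketch shows only a single component.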

Zhengwei Fang, Rui Wang, Tao Huang, Liping Jing • 2022

Related benchmarks

Task | Dataset | Metric | Result | Rank
---- | ------- | ------ | ------ | ----
Adversarial Attack | ImageNet (val) | - | - | 222
Adversarial Attack | ImageNet (test) | Success Rate | 56.3 | 101
Adversarial Attack | ImageNet-1K | Inc-v3ens3 | 79.7 | 48
Adversarial Attack | ImageNet1k 1000 images subset | Model Accuracy (Inception v3) | 96.5 | 24
Adversarial Attack | ImageNet 1k 1000 images | Robust Accuracy (Inc-v3) | 100 | 24
Adversarial Attack | ImageNet Clean | Success Rate | 53 | 15
Black-box Adversarial Attack | ImageNet (test) | Success Rate (Res34) | 100 | 13
