
How Worst-Case Are Adversarial Attacks? Linking Adversarial and Perturbation Robustness

About

Adversarial attacks are widely used to identify model vulnerabilities; however, their validity as proxies for robustness to random perturbations remains debated. We ask whether an adversarial example provides a representative estimate of misprediction risk under stochastic perturbations of the same magnitude, or instead reflects an atypical worst-case event. To address this question, we introduce a probabilistic analysis that quantifies this risk with respect to directionally biased perturbation distributions, parameterized by a concentration factor $\kappa$ that interpolates between isotropic noise and adversarial directions. Building on this analysis, we probe the limits of the connection by proposing an attack strategy designed to expose vulnerabilities in regimes that are statistically closer to uniform noise. Experiments on ImageNet and CIFAR-10 systematically benchmark multiple attacks, revealing when adversarial success meaningfully reflects robustness to random perturbations and when it does not, thereby informing their use in safety-oriented robustness evaluation.
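To make the idea of a $\kappa$-biased perturbation distribution concrete, here is a minimal sketch of one plausible parameterization: the sampled direction linearly interpolates between a uniform direction on the sphere ($\kappa = 0$) and the adversarial direction ($\kappa = 1$), then is rescaled to the fixed magnitude $\varepsilon$. This is an illustrative assumption, not necessarily the exact distribution defined in the paper.

```python
import numpy as np

def biased_perturbation(adv_dir, eps, kappa, rng=None):
    """Sample a perturbation of magnitude eps whose direction is biased
    toward the adversarial direction by a concentration factor kappa.

    kappa = 0 -> isotropic noise (uniform direction on the eps-sphere)
    kappa = 1 -> exactly the adversarial direction
    Illustrative parameterization; the paper's distribution may differ.
    """
    rng = np.random.default_rng() if rng is None else rng
    adv_dir = adv_dir / np.linalg.norm(adv_dir)       # unit adversarial direction
    noise = rng.standard_normal(adv_dir.shape)        # Gaussian -> uniform direction
    noise /= np.linalg.norm(noise)
    direction = kappa * adv_dir + (1.0 - kappa) * noise
    direction /= np.linalg.norm(direction)            # renormalize the mixture
    return eps * direction                            # fixed perturbation magnitude

# Estimating misprediction risk at a given kappa would then amount to
# Monte Carlo sampling: draw many such perturbations, add each to the
# input, and measure the fraction that flips the model's prediction.
```

Sweeping `kappa` from 0 to 1 interpolates the evaluation between random-noise robustness and the adversarial worst case, which is the regime comparison the analysis above is built around.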

Giulio Rossolini • 2026

Related benchmarks

Task | Dataset | Metric | Result | Rank
Adversarial Robustness | CIFAR-10 (test) | Attack Success Rate (ASR) | 58.5 | 76
