RaPA: Enhancing Transferable Targeted Attacks via Random Parameter Pruning
About
Compared to untargeted attacks, targeted transfer-based attacks still suffer from much lower Attack Success Rates (ASRs), despite significant improvements from a variety of methods, such as diversifying inputs, stabilizing gradients, and re-training surrogate models. In this paper, we find that adversarial examples generated by existing methods rely heavily on a small subset of surrogate model parameters, which in turn limits their transferability to unseen target models. Inspired by this, we propose the Random Parameter Pruning Attack (RaPA), which introduces parameter-level randomization during the attack process. At each optimization step, RaPA randomly prunes model parameters to generate diverse yet semantically consistent surrogate variants. We show that this parameter-level randomization is equivalent to adding an importance-equalization regularizer, thereby alleviating the over-reliance issue. Extensive experiments across both CNN and Transformer architectures demonstrate that RaPA substantially enhances transferability. In the challenging case of transferring from CNN-based to Transformer-based models, RaPA achieves up to 11.7% higher average ASRs than state-of-the-art baselines (which reach 33.3% ASRs), while being training-free, cross-architecture efficient, and easily integrated into existing attack frameworks. Code is available at https://github.com/molarsu/RaPA.
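The core idea — randomly pruning surrogate parameters at every optimization step before computing the attack gradient — can be illustrated with a minimal NumPy sketch. This is not the repository's implementation: the toy linear surrogate, the variable names, and the hyperparameters (`prune_rate`, `eps`, `alpha`) are all illustrative assumptions; RaPA itself operates on full CNN/Transformer surrogates.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy surrogate: logits = W @ x. A targeted attack pushes the
# input toward target class t under an L-infinity budget eps.
W = rng.normal(size=(5, 8))   # surrogate parameters (stand-in for a network)
x = rng.normal(size=8)        # clean input
t = 2                         # target class index
eps, alpha, steps, prune_rate = 0.5, 0.05, 20, 0.3

x_adv = x.copy()
for _ in range(steps):
    # RaPA-style randomization (sketch): randomly prune a fraction of the
    # surrogate's parameters at each step, yielding a fresh surrogate variant.
    mask = rng.random(W.shape) >= prune_rate
    W_pruned = W * mask
    # Gradient of the target logit w.r.t. the input for this pruned variant.
    grad = W_pruned[t]
    # Targeted sign step, projected back onto the eps-ball around x.
    x_adv = np.clip(x_adv + alpha * np.sign(grad), x - eps, x + eps)

print(float(W[t] @ x_adv - W[t] @ x))  # target logit rises on the full model
```

Because a different random mask is drawn at every step, the attack cannot overfit to any fixed subset of parameters, which is the over-reliance the abstract describes; in a real pipeline the same per-step masking would be applied to the surrogate network's weight tensors.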
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Targeted Adversarial Attack | ImageNet-Compatible | Avg Success Rate | 89 | 73 |
| Adversarial Attack Transferability | ImageNet-Compatible | Transferability on ViT | 99.6 | 29 |
| Adversarial Attack Transferability | ImageNet-compatible (test) | RN18 | 51.3 | 22 |
| Adversarial Attack | ImageNet-Compatible | HGD Score | 25.7 | 19 |
| Adversarial Attack | ImageNet V2 | ASR | 91 | 12 |
| Transferable Adversarial Attack | ImageNet | Performance (ViT) | 50.8 | 5 |