
Efficient Generation of Targeted and Transferable Adversarial Examples for Vision-Language Models Via Diffusion Models

About

Adversarial attacks, particularly targeted transfer-based attacks, can be used to assess the adversarial robustness of large vision-language models (VLMs), allowing potential security flaws to be examined more thoroughly before deployment. However, previous transfer-based adversarial attacks incur high costs due to high iteration counts and complex method structures. Furthermore, because the adversarial semantics they introduce appear unnatural, the generated adversarial examples transfer poorly. These issues limit the utility of existing methods for assessing robustness. To address them, we propose AdvDiffVLM, which uses diffusion models to generate natural, unrestricted, targeted adversarial examples via score matching. Specifically, AdvDiffVLM uses Adaptive Ensemble Gradient Estimation to modify the score during the diffusion model's reverse generation process, ensuring that the produced adversarial examples carry natural, targeted adversarial semantics and therefore transfer better. To further improve the quality of adversarial examples, we use a GradCAM-guided Mask method to disperse adversarial semantics throughout the image rather than concentrating them in a single area. Finally, AdvDiffVLM embeds more target semantics into adversarial examples over multiple iterations. Experimental results show that our method generates adversarial examples 5x to 10x faster than state-of-the-art transfer-based adversarial attacks while maintaining higher quality. Compared with previous transfer-based attacks, the adversarial examples generated by our method also exhibit better transferability. Notably, AdvDiffVLM can successfully attack a variety of commercial VLMs in a black-box setting, including GPT-4V.
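The core idea above — shifting the diffusion model's score with an ensemble of surrogate-model gradients, masked by a GradCAM-style saliency map — can be sketched as follows. This is a minimal toy illustration, not the authors' implementation: the function names (`adaptive_ensemble_gradient`, `guided_reverse_step`), the fixed `alpha`/`alpha_bar` schedule values, and the random stand-ins for surrogate gradients and the saliency mask are all assumptions for demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

def adaptive_ensemble_gradient(grads, weights):
    """Weighted average of surrogate-VLM gradients (a sketch of the
    paper's Adaptive Ensemble Gradient Estimation; weights hypothetical)."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                      # normalize the adaptive weights
    return sum(wi * g for wi, g in zip(w, grads))

def guided_reverse_step(x_t, t, eps_pred, guidance, scale=1.0,
                        alpha=0.99, alpha_bar=0.9):
    """One DDPM-style reverse step where the predicted noise is shifted
    by a guidance gradient (a score-matching-style modification)."""
    eps = eps_pred - scale * np.sqrt(1.0 - alpha_bar) * guidance
    mean = (x_t - (1.0 - alpha) / np.sqrt(1.0 - alpha_bar) * eps) / np.sqrt(alpha)
    noise = rng.standard_normal(x_t.shape) if t > 0 else 0.0
    return mean + np.sqrt(1.0 - alpha) * noise

# Toy data: an 8x8 "image", three surrogate gradients, a saliency mask.
x = rng.standard_normal((8, 8))
grads = [rng.standard_normal((8, 8)) for _ in range(3)]
cam_mask = np.clip(rng.random((8, 8)), 0.2, 1.0)  # GradCAM-style mask stand-in

g = adaptive_ensemble_gradient(grads, weights=[0.5, 0.3, 0.2])
g = g * cam_mask                   # disperse the update away from one region
x_next = guided_reverse_step(x, t=10, eps_pred=rng.standard_normal((8, 8)),
                             guidance=g, scale=2.0)
print(x_next.shape)
```

In the actual method the guidance would come from real surrogate VLMs' image–text losses and a real GradCAM map, and the step would be repeated across the full reverse trajectory; the sketch only shows how a single masked ensemble gradient enters one reverse step.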

Qi Guo, Shanmin Pang, Xiaojun Jia, Yang Liu, Qing Guo • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Black-box Attack | VLM Evaluation Set (test) | Ensemble Success Rate | 56.75 | 96 |
| Black-box Adversarial Attack | Claude thinking 4.0 | KMR (a) | 0.04 | 9 |
| Black-box Adversarial Attack | GPT-5 | KMR (a) | 4 | 9 |
| Black-box Adversarial Attack | Gemini 2.5-Pro | KMR (a) | 0.03 | 9 |
| Imperceptibility Evaluation | Black-Box LVLM Attack Set | L1 Distance | 0.064 | 9 |
| Visual Quality Assessment | Adversarial Examples Visual Quality Evaluation | SSIM | 0.69 | 8 |
