
Crafting Adversarial Inputs for Large Vision-Language Models Using Black-Box Optimization

About

Recent advancements in Large Vision-Language Models (LVLMs) have shown groundbreaking capabilities across diverse multimodal tasks. However, these models remain vulnerable to adversarial jailbreak attacks, in which adversaries craft subtle perturbations to bypass safety mechanisms and trigger harmful outputs. Existing white-box attack methods require full model access, incur high computational costs, and exhibit insufficient adversarial transferability, making them impractical for real-world, black-box settings. To address these limitations, we propose a black-box jailbreak attack on LVLMs via Zeroth-Order optimization using Simultaneous Perturbation Stochastic Approximation (ZO-SPSA). ZO-SPSA provides three key advantages: (i) gradient-free approximation through input-output interactions without requiring model knowledge, (ii) model-agnostic optimization without a surrogate model, and (iii) lower resource requirements with reduced GPU memory consumption. We evaluate ZO-SPSA on three LVLMs, including InstructBLIP, LLaVA, and MiniGPT-4, achieving the highest jailbreak success rate of 83.0% on InstructBLIP while maintaining imperceptible perturbations comparable to those of white-box methods. Moreover, adversarial examples generated on MiniGPT-4 exhibit strong transferability to other LVLMs, with an attack success rate (ASR) reaching 64.18%. These findings underscore the real-world feasibility of black-box jailbreaks and expose critical weaknesses in the safety mechanisms of current LVLMs.
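The core idea behind the method is the classic SPSA estimator: the gradient of the model's loss with respect to the input image is approximated from just two black-box queries along a random sign direction, so no weights or backpropagation are needed. The sketch below is an illustrative NumPy toy, not the authors' implementation; the function names (`spsa_gradient`, `spsa_attack`), the query budget, the step size, and the `loss_fn` oracle standing in for LVLM queries are all assumptions for illustration.

```python
import numpy as np

def spsa_gradient(loss_fn, x, c=0.01):
    """Two-query SPSA estimate of the gradient of loss_fn at x (illustrative sketch)."""
    # Rademacher perturbation direction: each coordinate is +1 or -1
    delta = np.random.choice([-1.0, 1.0], size=x.shape)
    # Two black-box loss queries at symmetrically perturbed inputs
    diff = loss_fn(x + c * delta) - loss_fn(x - c * delta)
    # Per-coordinate estimate; since delta_i is +/-1, 1/delta_i == delta_i
    return diff / (2.0 * c) * delta

def spsa_attack(loss_fn, x, steps=100, lr=0.1, c=0.01, eps=8 / 255):
    """Signed descent on the estimated gradient, clipped to an eps-ball
    around the clean input so the perturbation stays imperceptible."""
    x0 = x.copy()
    for _ in range(steps):
        g = spsa_gradient(loss_fn, x, c)
        x = x - lr * np.sign(g)             # move against the estimated gradient
        x = np.clip(x, x0 - eps, x0 + eps)  # enforce the perturbation budget
    return x
```

Each iteration costs only two model queries regardless of input dimension, which is the source of the memory and compute savings the abstract claims over white-box methods; in a real attack `loss_fn` would score the LVLM's generated text against the target harmful response.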

Jiwei Guan, Haibo Jin, Haohan Wang• 2026

Related benchmarks

Task | Dataset | Metric | Result | Rank
Jailbreak Attack | AdvBench (test) | - | - | 22
Jailbreak attack success rate | VAJA | Identity Attack Success Rate | 92.3 | 15
Toxicity Generation | RealToxicityPrompts (test) | Perspective API Score | 46.67 | 12
Toxicity Generation | VAJA (test) | Toxicity Score (Perspective API) | 17.9 | 9
