Seeing No Evil: Blinding Large Vision-Language Models to Safety Instructions via Adversarial Attention Hijacking

About

Large Vision-Language Models (LVLMs) rely on attention-based retrieval of safety instructions to maintain alignment during generation. Existing attacks typically optimize image perturbations to maximize harmful output likelihood, but suffer from slow convergence due to gradient conflict between adversarial objectives and the model's safety-retrieval mechanism. We propose Attention-Guided Visual Jailbreaking, which circumvents rather than overpowers safety alignment by directly manipulating attention patterns. Our method introduces two simple auxiliary objectives: (1) suppressing attention to alignment-relevant prefix tokens and (2) anchoring generation on adversarial image features. This push-pull formulation reduces gradient conflict by 45% and achieves a 94.4% attack success rate on Qwen-VL (vs. 68.8% for the baseline) with 40% fewer iterations. At tighter perturbation budgets ($\epsilon=8/255$), we maintain 59.0% ASR compared to 45.7% for standard methods. Mechanistic analysis reveals a failure mode we term safety blindness: successful attacks suppress system-prompt attention by 80%, causing models to generate harmful content not by overriding safety rules, but by failing to retrieve them.
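To make the push-pull formulation concrete, the sketch below shows one plausible way to combine the three terms into a single loss that is minimized over the image perturbation. This is not the authors' code: the function and argument names (push_pull_loss, prefix_idx, image_idx), the loss weights, and the averaging over layers and heads are all assumptions made for illustration.

```python
# Hypothetical sketch of a push-pull adversarial objective for an LVLM jailbreak:
# the standard harmful-target likelihood term, plus (1) a "push" term that drives
# attention mass on safety/system-prompt tokens toward zero and (2) a "pull" term
# that rewards attention mass on the adversarial image tokens. The total loss would
# be minimized w.r.t. the image perturbation (e.g. by PGD inside an eps-ball).
import torch
import torch.nn.functional as F

def push_pull_loss(logits, target_ids, attentions, prefix_idx, image_idx,
                   lambda_suppress=1.0, lambda_anchor=1.0):
    """
    logits:      (T, V) next-token logits for the target harmful completion
    target_ids:  (T,)   token ids of the target completion
    attentions:  (L, H, T, S) attention weights from the T generated positions
                 to the S source tokens, stacked over L layers and H heads
    prefix_idx:  indices of alignment-relevant system-prompt tokens in the source
    image_idx:   indices of the adversarial image tokens in the source
    """
    # Adversarial term: maximize likelihood of the harmful target
    # (equivalently, minimize its cross-entropy).
    ce = F.cross_entropy(logits, target_ids)

    # Average attention over layers and heads -> (T, S).
    attn = attentions.mean(dim=(0, 1))

    # "Push": attention mass on safety-prefix tokens, to be suppressed.
    safety_mass = attn[:, prefix_idx].sum(dim=-1).mean()

    # "Pull": attention mass on adversarial image tokens, to be maximized.
    image_mass = attn[:, image_idx].sum(dim=-1).mean()

    return ce + lambda_suppress * safety_mass - lambda_anchor * image_mass
```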

Jingru Li, Wei Ren, Tianqing Zhu • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Jailbreak Attack | HarmBench | -- | -- | 487 |
| Jailbreak Attack | StrongREJECT | Attack Success Rate | 38.9 | 138 |
| Jailbreak Attack | JailbreakBench | ASR | 36.2 | 76 |
| Jailbreak Attack | AdvBench | Attack Success Rate (ASR) | 12.1 | 48 |
| Jailbreaking | StrongREJECT | ASR (Detoxify) | 0.3 | 20 |
| Jailbreaking | JailbreakBench | ASR (Detoxify) | 0.00e+0 | 20 |
| Jailbreaking | AdvBench | ASR (Detoxify) | 0.2 | 20 |
| Jailbreaking | HarmBench (transfer attack) | Average Success Rate | 58.7 | 14 |
