
VisCRA: A Visual Chain Reasoning Attack for Jailbreaking Multimodal Large Language Models

About

The emergence of Multimodal Large Reasoning Models (MLRMs) has enabled sophisticated visual reasoning capabilities by integrating reinforcement learning and Chain-of-Thought (CoT) supervision. However, while these enhanced reasoning capabilities improve performance, they also introduce new and underexplored safety risks. In this work, we systematically investigate the security implications of advanced visual reasoning in MLRMs. Our analysis reveals a fundamental trade-off: as visual reasoning improves, models become more vulnerable to jailbreak attacks. Motivated by this critical finding, we introduce VisCRA (Visual Chain Reasoning Attack), a novel jailbreak framework that exploits visual reasoning chains to bypass safety mechanisms. VisCRA combines targeted visual attention masking with a two-stage reasoning induction strategy to precisely control harmful outputs. Extensive experiments demonstrate VisCRA's significant effectiveness, achieving high attack success rates on leading closed-source MLRMs: 76.48% on Gemini 2.0 Flash Thinking, 68.56% on QvQ-Max, and 56.60% on GPT-4o. Our findings highlight a critical insight: the very capability that empowers MLRMs, their visual reasoning, can also serve as an attack vector, posing significant security risks.
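The two-stage structure described in the abstract can be illustrated with a rough sketch. Everything below (the function names, the prompts, the masking scheme, the `query_model` stub) is a hypothetical illustration of the general idea, not the authors' implementation:

```python
# Illustrative sketch of a VisCRA-style two-stage jailbreak pipeline.
# All names and prompts here are assumptions for illustration; the paper's
# actual attention-masking and reasoning-induction details may differ.

def mask_attention_region(image, region):
    """Hypothetical stand-in: occlude a visually salient region of the image."""
    masked = dict(image)  # placeholder for real image editing
    masked["masked_region"] = region
    return masked

def viscra_attack(image, region, query_model):
    """Run the sketched two-stage attack against a model callable."""
    masked = mask_attention_region(image, region)

    # Stage 1: induce the model to reason about and reconstruct the
    # occluded content, engaging its visual reasoning chain.
    stage1 = query_model(masked, "Describe and infer the occluded part of this image.")

    # Stage 2: build on the model's own inferred description to steer the
    # reasoning chain toward the attacker's objective.
    stage2 = query_model(masked, f"Given that {stage1}, continue the reasoning in detail.")
    return stage2

# Usage with a dummy model stub (no real MLRM is called):
def dummy_model(image, prompt):
    return f"[response to: {prompt[:40]}]"

print(viscra_attack({"pixels": "..."}, (10, 10, 50, 50), dummy_model))
```

The point of the sketch is the dependency between stages: the second prompt embeds the model's own stage-1 output, so the model's reasoning momentum, rather than a single direct request, carries it toward the harmful completion.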

Bingrui Sima, Linhua Cong, Wenxuan Wang, Kun He • 2025

Related benchmarks

Task              Dataset                 Metric                   Result (%)   Rank
Jailbreak Attack  HADES                   Attack Success Rate      65.87        59
Jailbreak Attack  MM-SafetyBench (tiny)   ASR                      82.2         25
Jailbreak Attack  HADES (test)            Self-harm Success Rate   62.67        15
Jailbreak Attack  HADES Privacy           ASR                      92.67        15
Jailbreak Attack  HADES Financial         ASR                      91.33        15
Jailbreak Attack  HADES Violence          ASR                      65.33        15
Jailbreak Attack  HADES Self-harm         ASR                      44.67        15
Jailbreak Attack  HADES Animals           ASR                      44           15
Jailbreak Attack  HADES All categories    ASR                      56.6         15
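The ASR values in the table are percentages: the fraction of harmful queries for which the attack elicits a policy-violating response. A minimal sketch of the computation (how responses are judged harmful is outside this snippet and is an assumption of the caller):

```python
def attack_success_rate(outcomes):
    """ASR = successful jailbreaks / total attempts, as a percentage.

    `outcomes` is a list of booleans, one per harmful query, where True
    means a judge deemed the model's response harmful (attack succeeded).
    """
    if not outcomes:
        return 0.0
    return 100.0 * sum(outcomes) / len(outcomes)

# e.g. 283 successful jailbreaks out of 500 attempts:
print(round(attack_success_rate([True] * 283 + [False] * 217), 2))  # → 56.6
```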
