VisCRA: A Visual Chain Reasoning Attack for Jailbreaking Multimodal Large Language Models
About
The emergence of Multimodal Large Reasoning Models (MLRMs) has enabled sophisticated visual reasoning by integrating reinforcement learning and Chain-of-Thought (CoT) supervision. However, while these enhanced reasoning capabilities improve performance, they also introduce new and underexplored safety risks. In this work, we systematically investigate the security implications of advanced visual reasoning in MLRMs. Our analysis reveals a fundamental trade-off: as visual reasoning improves, models become more vulnerable to jailbreak attacks. Motivated by this finding, we introduce VisCRA (Visual Chain Reasoning Attack), a novel jailbreak framework that exploits visual reasoning chains to bypass safety mechanisms. VisCRA combines targeted visual attention masking with a two-stage reasoning induction strategy to precisely control harmful outputs. Extensive experiments demonstrate VisCRA's effectiveness, achieving high attack success rates on leading closed-source MLRMs: 76.48% on Gemini 2.0 Flash Thinking, 68.56% on QvQ-Max, and 56.60% on GPT-4o. Our findings highlight a critical insight: the very capability that empowers MLRMs, their visual reasoning, can also serve as an attack vector, posing significant security risks.
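The abstract names two components: attention-guided masking of a sensitive image region, and a two-stage prompt that first induces the model to reason about the masked content and then chains that inference into the request. Below is a minimal, hypothetical Python sketch of both; the patch-pooling mask rule, the per-pixel `attention` saliency map, and the prompt wording are illustrative assumptions, not the paper's released implementation.

```python
# Hypothetical sketch of the two VisCRA components described in the abstract:
# (1) occluding the most-attended image region, (2) a two-stage reasoning prompt.
# The mask-selection rule and prompts are assumptions for illustration only.
import numpy as np

def mask_top_attention_region(image: np.ndarray,
                              attention: np.ndarray,
                              patch: int = 32) -> np.ndarray:
    """Occlude the patch that receives the highest visual attention.

    `attention` is assumed to be a saliency/attention map with the same
    height and width as `image`.
    """
    h, w = attention.shape
    # Pool attention over non-overlapping patches and locate the hottest one.
    cropped = attention[:h // patch * patch, :w // patch * patch]
    scores = cropped.reshape(h // patch, patch, w // patch, patch).sum(axis=(1, 3))
    py, px = np.unravel_index(np.argmax(scores), scores.shape)
    masked = image.copy()
    masked[py * patch:(py + 1) * patch, px * patch:(px + 1) * patch] = 0
    return masked

# Stage 1 asks the model to infer the hidden content from context;
# Stage 2 chains that inference into the actual task.
STAGE_1_PROMPT = ("Reason step by step about what the masked region of "
                  "this image most likely contains.")
STAGE_2_PROMPT = ("Based on your reasoning above, continue the chain of "
                  "thought and describe the full process in detail.")
```

Masking the most-attended region forces the model's own visual reasoning to reconstruct the withheld concept, which is the trade-off the abstract describes: stronger reasoning chains become the channel through which the safety filter is bypassed.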
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Jailbreak Attack | HADES | Attack Success Rate | 65.87% | 59 |
| Jailbreak Attack | MM-SafetyBench (tiny) | ASR | 82.2% | 25 |
| Jailbreak Attack | HADES (test) | Self-harm Success Rate | 62.67% | 15 |
| Jailbreak Attack | HADES Privacy | ASR | 92.67% | 15 |
| Jailbreak Attack | HADES Financial | ASR | 91.33% | 15 |
| Jailbreak Attack | HADES Violence | ASR | 65.33% | 15 |
| Jailbreak Attack | HADES Self-harm | ASR | 44.67% | 15 |
| Jailbreak Attack | HADES Animals | ASR | 44% | 15 |
| Jailbreak Attack | HADES All categories | ASR | 56.6% | 15 |
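For reference, the Attack Success Rate (ASR) values above are the percentage of adversarial attempts judged to elicit harmful output. A minimal sketch of the metric, where `is_harmful` stands in for whatever judge (human or model-based) each benchmark actually uses:

```python
from typing import Callable, Iterable

def attack_success_rate(responses: Iterable[str],
                        is_harmful: Callable[[str], bool]) -> float:
    """Percentage of responses a judge labels as successful jailbreaks.

    `is_harmful` is a placeholder assumption; the exact judging protocol
    differs across HADES and MM-SafetyBench.
    """
    responses = list(responses)
    hits = sum(1 for r in responses if is_harmful(r))
    return 100.0 * hits / len(responses)
```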