
Jailbreaking Multimodal Large Language Models via Shuffle Inconsistency

About

Multimodal Large Language Models (MLLMs) have achieved impressive performance and have been deployed in commercial applications, but their safety mechanisms remain potentially vulnerable. Jailbreak attacks are red-teaming methods that aim to bypass safety mechanisms and uncover MLLMs' potential risks. Existing MLLM jailbreak methods typically bypass a model's safety mechanism through complex optimization procedures or carefully designed image and text prompts. Despite some progress, they achieve a low attack success rate on commercial closed-source MLLMs. Unlike previous research, we empirically find that there exists a Shuffle Inconsistency between MLLMs' comprehension ability and safety ability on shuffled harmful instructions. That is, from the perspective of comprehension, MLLMs can understand shuffled harmful text-image instructions well; from the perspective of safety, however, they can easily be bypassed by those same shuffled harmful instructions, leading to harmful responses. We then propose a text-image jailbreak attack named SI-Attack. Specifically, to fully exploit the Shuffle Inconsistency and overcome the randomness of shuffling, we apply a query-based black-box optimization method that selects the most harmful shuffled inputs based on feedback from a toxic judge model. A series of experiments shows that SI-Attack improves attack performance on three benchmarks. In particular, SI-Attack markedly improves the attack success rate against commercial MLLMs such as GPT-4o and Claude-3.5-Sonnet.
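The query-based selection loop described above can be sketched as follows. This is a minimal toy illustration, not the paper's implementation: the real SI-Attack shuffles both text tokens and image patches and scores responses with an LLM-based toxic judge, whereas here `target_model` and `toxicity_judge` are hypothetical keyword-based stand-ins.

```python
import random

def shuffle_text(text, rng):
    """Randomly permute the words of an instruction (the 'shuffle' step)."""
    words = text.split()
    rng.shuffle(words)
    return " ".join(words)

def target_model(prompt):
    """Stand-in for the attacked MLLM (text side only); always complies here."""
    return "Sure, step one: " + prompt

def toxicity_judge(response):
    """Stand-in for the toxic judge model: a toy keyword score.
    The paper uses an LLM judge to rate response harmfulness."""
    keywords = ("sure", "step", "first")
    return sum(response.lower().count(k) for k in keywords)

def si_attack(instruction, n_queries=10, seed=0):
    """Query-based black-box search: among random shuffles of the
    instruction, keep the one whose response the judge scores highest."""
    rng = random.Random(seed)
    best_input, best_score = instruction, -1
    for _ in range(n_queries):
        candidate = shuffle_text(instruction, rng)
        score = toxicity_judge(target_model(candidate))
        if score > best_score:
            best_input, best_score = candidate, score
    return best_input, best_score
```

The loop only needs query access to the target model and a scalar score from the judge, which is why the attack applies to closed-source commercial MLLMs.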

Shiji Zhao, Ranjie Duan, Fengxiang Wang, Chi Chen, Caixin Kang, Shouwei Ruan, Jialing Tao, YueFeng Chen, Hui Xue, Xingxing Wei • 2025

Related benchmarks

| Task | Dataset | Metric | Score | Rank |
|------|---------|--------|-------|------|
| Jailbreak Attack | HADES Self-harm | ASR | 32.67 | 15 |
| Jailbreak Attack | HADES Animals | ASR | 20.67 | 15 |
| Jailbreak Attack | HADES All categories | ASR | 48 | 15 |
| Jailbreak Attack | HADES Violence | ASR | 0.66 | 15 |
| Jailbreak Attack | HADES (test) | Self-harm Success Rate | 49.33 | 15 |
| Jailbreak Attack | HADES Privacy | ASR | 69.33 | 15 |
| Jailbreak Attack | HADES Financial | ASR | 66 | 15 |
| Jailbreak Attack | MM-SafetyBench | Attack Success Rate (ASR) | 68.57 | 8 |
