Images are Achilles' Heel of Alignment: Exploiting Visual Vulnerabilities for Jailbreaking Multimodal Large Language Models
About
In this paper, we study the harmlessness alignment problem of multimodal large language models (MLLMs). We conduct a systematic empirical analysis of the harmlessness performance of representative MLLMs and reveal that the image input poses the alignment vulnerability of MLLMs. Inspired by this, we propose a novel jailbreak method named HADES, which hides and amplifies the harmfulness of the malicious intent within the text input, using meticulously crafted images. Experimental results show that HADES can effectively jailbreak existing MLLMs, which achieves an average Attack Success Rate (ASR) of 90.26% for LLaVA-1.5 and 71.60% for Gemini Pro Vision. Our code and data are available at https://github.com/RUCAIBox/HADES.
Yifan Li, Hangyu Guo, Kun Zhou, Wayne Xin Zhao, Ji-Rong Wen• 2024
Related benchmarks
| Task | Dataset | Result | Rank | |
|---|---|---|---|---|
| Jailbreak Attack | SafeBench | ASR6 | 112 | |
| Jailbreak Defense | JBB-Behaviors | ASR1 | 101 | |
| Jailbreak Attack | Safebench (test) | IA ASR72 | 20 | |
| Jailbreak Attack | Safety Evaluation Benchmark Harmful Categories | ASR (IA)12 | 20 | |
| Multimodal Jailbreaking | HADES-Dataset | ASR (%)40.93 | 20 | |
| Jailbreak Attack | HADES | Success Rate (Animal)10 | 18 | |
| Jailbreak Attack | HADES Self-harm | ASR5.33 | 15 | |
| Jailbreak Attack | HADES Animals | ASR3.33 | 15 | |
| Jailbreak Attack | HADES Violence | ASR0.3 | 15 | |
| Jailbreak Attack | HADES All categories | ASR18.93 | 15 |
Showing 10 of 15 rows