Cross-modality Information Check for Detecting Jailbreaking in Multimodal Large Language Models
About
Multimodal Large Language Models (MLLMs) extend the capability of LLMs to comprehensively understand multimodal information, achieving remarkable performance on many vision-centric tasks. Despite this, recent studies have shown that these models are susceptible to jailbreak attacks: exploitative techniques by which malicious users break the safety alignment of the target model and elicit misleading or harmful answers. This threat stems both from the inherent vulnerabilities of the LLM and from the larger attack surface introduced by the vision input. To harden MLLMs against jailbreak attacks, researchers have developed various defense techniques. However, these methods either require modifications to the model's internal structure or demand significant computational resources at inference time. Multimodal information is a double-edged sword: while it increases the risk of attacks, it also provides additional signal that can strengthen safeguards. Inspired by this, we propose the Cross-modality Information DEtectoR (CIDER), a plug-and-play jailbreak detector designed to identify maliciously perturbed image inputs by exploiting the cross-modal similarity between harmful queries and adversarial images. CIDER is independent of the target MLLM and incurs low computational cost. Extensive experimental results demonstrate the effectiveness and efficiency of CIDER, as well as its transferability to both white-box and black-box MLLMs.
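The core idea above can be sketched in a few lines: embed the text query and the image with a shared cross-modal encoder, then flag the image when its similarity to the query is anomalously high. This is a minimal illustration, not CIDER's actual implementation; the encoder is stood in for by toy vectors, and the threshold value is hypothetical.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_adversarial(text_emb: np.ndarray, image_emb: np.ndarray,
                   threshold: float = 0.25) -> bool:
    """Flag the image as potentially perturbed when its cross-modal
    similarity to the (possibly harmful) text query exceeds the
    threshold. The threshold here is illustrative, not from the paper."""
    return cosine_similarity(text_emb, image_emb) > threshold

# Toy vectors standing in for a cross-modal encoder's (e.g. CLIP-style)
# text and image embeddings.
rng = np.random.default_rng(0)
text_emb = rng.normal(size=512)
benign_image_emb = rng.normal(size=512)                 # near-orthogonal to the query
adv_image_emb = text_emb + 0.1 * rng.normal(size=512)   # aligned with the query

print(is_adversarial(text_emb, benign_image_emb))  # near-orthogonal: not flagged
print(is_adversarial(text_emb, adv_image_emb))     # strongly aligned: flagged
```

In practice the embeddings would come from a frozen cross-modal encoder, which is what keeps the detector independent of the target MLLM: no access to the target model's weights or internals is needed, only its inputs.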
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Text-based Jailbreak | AdvBench-M OOD | ASR (OOD) | 61.3 | 16 |
| Text-based Jailbreak | JailbreakV_28K IND | Attack Success Rate (ASR) | 48.53 | 16 |
| Direct Malicious | VLSafe OOD | ASR | 50 | 16 |
| Image-based Jailbreak | HADES OOD | Attack Success Rate (ASR) | 51.86 | 16 |
| Image-based Jailbreak | JailbreakV_28K IND | ASR | 37.2 | 16 |
| Malicious Prompt Detection | JailbreakV_28K Image-based (test) | FNR | 37.2 | 16 |
| Direct Malicious | MM-SafetyBench OOD | ASR | 46.91 | 16 |
| Image-based Jailbreak | FigStep OOD | ASR | 40.03 | 16 |
| Malicious Prompt Detection | JailbreakV_28K Text-based (test) | FNR | 48.53 | 16 |
| Computational Efficiency | Malicious Prompt Detection Benchmarks | Detection Time (s) | 1.42 | 14 |