
Cross-modality Information Check for Detecting Jailbreaking in Multimodal Large Language Models

About

Multimodal Large Language Models (MLLMs) extend the capability of LLMs to understand multimodal information comprehensively, achieving remarkable performance on many vision-centric tasks. Despite this, recent studies have shown that these models are susceptible to jailbreak attacks, an exploitative technique in which malicious users break the safety alignment of the target model and elicit misleading or harmful answers. This threat stems from both the inherent vulnerabilities of LLMs and the larger attack surface introduced by vision input. To strengthen MLLMs against jailbreak attacks, researchers have developed various defense techniques. However, these methods either require modifications to the model's internal structure or demand significant computational resources during inference. Multimodal information is a double-edged sword: while it increases the risk of attacks, it also provides additional signals that can enhance safeguards. Inspired by this, we propose the Cross-modality Information DEtectoR (CIDER), a plug-and-play jailbreaking detector designed to identify maliciously perturbed image inputs, utilizing the cross-modal similarity between harmful queries and adversarial images. CIDER is independent of the target MLLM and incurs a lower computational cost. Extensive experimental results demonstrate the effectiveness and efficiency of CIDER, as well as its transferability to both white-box and black-box MLLMs.
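The core idea described above can be sketched in a few lines: given a shared text-image embedding space (a CLIP-style encoder, for example), score the cross-modal similarity between the text query and the image embedding, and flag inputs whose alignment is suspicious. The encoder, threshold value, and function names below are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def is_suspicious(text_emb: np.ndarray, image_emb: np.ndarray,
                  threshold: float = 0.9) -> bool:
    """Flag an image whose embedding is unusually aligned with a
    (potentially harmful) text query.

    `threshold` is illustrative; in practice it would be calibrated
    on benign query-image pairs from a held-out set.
    """
    return cosine_similarity(text_emb, image_emb) > threshold

# Toy demonstration with hand-made embeddings (a real detector would
# obtain these from a shared multimodal encoder such as CLIP):
text_emb = np.array([1.0, 0.0, 0.0])
aligned_img = np.array([0.9, 0.1, 0.0])    # suspiciously aligned
unrelated_img = np.array([0.0, 1.0, 0.0])  # benign, unrelated

print(is_suspicious(text_emb, aligned_img))    # high similarity -> flagged
print(is_suspicious(text_emb, unrelated_img))  # low similarity -> passed
```

Because the check runs entirely on precomputed embeddings, it can sit in front of any target MLLM without touching its weights, which matches the plug-and-play property claimed for CIDER.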

Yue Xu, Xiuyuan Qi, Zhan Qin, Wenjie Wang • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Text-based Jailbreak | AdvBench-M OOD | ASR (OOD) | 61.3 | 16 |
| Text-based Jailbreak | JailbreakV_28K IND | Attack Success Rate (ASR) | 48.53 | 16 |
| Direct Malicious | VLSafe OOD | ASR | 50 | 16 |
| Image-based Jailbreak | HADES OOD | ASR | 51.86 | 16 |
| Image-based Jailbreak | JailbreakV_28K IND | ASR | 37.2 | 16 |
| Malicious Prompt Detection | JailbreakV_28K Image-based (test) | FNR | 37.2 | 16 |
| Direct Malicious | MM-SafetyBench OOD | ASR | 46.91 | 16 |
| Image-based Jailbreak | FigStep OOD | ASR | 40.03 | 16 |
| Malicious Prompt Detection | JailbreakV_28K Text-based (test) | FNR | 48.53 | 16 |
| Computational Efficiency | Malicious Prompt Detection Benchmarks | Detection Time (s) | 1.42 | 14 |

Showing 10 of 18 rows.
