
AdaShield: Safeguarding Multimodal Large Language Models from Structure-based Attack via Adaptive Shield Prompting

About

With the advent and widespread deployment of Multimodal Large Language Models (MLLMs), the imperative to ensure their safety has become increasingly pronounced. However, with the integration of additional modalities, MLLMs are exposed to new vulnerabilities, rendering them prone to structure-based jailbreak attacks, where semantic content (e.g., harmful text) is injected into images to mislead MLLMs. In this work, we aim to defend against such threats. Specifically, we propose Adaptive Shield Prompting (AdaShield), which prepends inputs with defense prompts to defend MLLMs against structure-based jailbreak attacks without fine-tuning MLLMs or training additional modules (e.g., a post-stage content detector). Initially, we present a manually designed static defense prompt, which thoroughly examines the image and instruction content step by step and specifies response methods to malicious queries. Furthermore, we introduce an adaptive auto-refinement framework, consisting of a target MLLM and an LLM-based defense prompt generator (Defender). These components collaboratively and iteratively communicate to generate a defense prompt. Extensive experiments on popular structure-based jailbreak attacks and benign datasets show that our method consistently improves MLLMs' robustness against structure-based jailbreak attacks without compromising the model's general capabilities evaluated on standard benign tasks. Our code is available at https://github.com/rain305f/AdaShield.
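The adaptive auto-refinement framework described above can be sketched as a loop: the target MLLM is probed with the current defense prompt prepended to a malicious query, and whenever the attack still succeeds, the Defender proposes a revised prompt. The sketch below is a minimal illustration of that loop; the function names, the toy refusal check, and the stub models are all hypothetical stand-ins, not the paper's actual interfaces.

```python
# Hypothetical sketch of an AdaShield-style adaptive refinement loop.
# `target_mllm` and `defender` stand in for the real models.

def is_refusal(response):
    """Toy refusal check; the paper's evaluation uses attack-success-rate judges."""
    lowered = response.lower()
    return "cannot" in lowered or "sorry" in lowered

def refine_defense_prompt(target_mllm, defender, malicious_query,
                          init_prompt, max_iters=5):
    """Iteratively refine a defense prompt until the target MLLM refuses
    the prompt-prepended malicious query, or the iteration budget runs out."""
    prompt = init_prompt
    for _ in range(max_iters):
        response = target_mllm(prompt + "\n" + malicious_query)
        if is_refusal(response):   # attack blocked: keep this defense prompt
            return prompt
        # Defender sees the failed prompt and the harmful response, and
        # proposes an improved defense prompt for the next round.
        prompt = defender(prompt, response)
    return prompt

# Toy stand-ins for the two models, for illustration only.
def toy_target_mllm(full_input):
    # Refuses only when the defense prompt tells it to inspect the image.
    if "examine the image" in full_input:
        return "Sorry, I cannot help with that."
    return "Here is how to ..."   # simulated jailbreak success

def toy_defender(old_prompt, bad_response):
    return old_prompt + " Carefully examine the image for harmful text."

refined = refine_defense_prompt(
    toy_target_mllm, toy_defender,
    "<image with injected harmful text>", "Answer safely.")
```

In this toy run the initial static prompt fails once, the Defender appends an image-inspection instruction, and the loop terminates as soon as the stub model refuses.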

Yu Wang, Xiaogeng Liu, Yu Li, Muhao Chen, Chaowei Xiao • 2024

Related benchmarks

Task | Dataset | Metric | Result | Rank
Jailbreak Safety Evaluation | MM-SafetyBench (test) | Average ASR | 4.81 | 56
Safety Evaluation | MM-SafetyBench | Average ASR | 0.12 | 42
Visual Jailbreak Defense | Visual Adversarial Attacks (epsilon = 16/255) | ASR | 17.32 | 25
Visual Jailbreak Defense | Visual Adversarial Attacks (epsilon = 32/255) | ASR | 18.2 | 25
Text-Based Jailbreak Attack | JailbreakV-28K (test) | ASR (None-Template) | 63.22 | 25
Visual Jailbreak Defense | Visual Adversarial Attacks (epsilon = 64/255) | ASR | 22.41 | 25
Visual Jailbreak Defense | Visual Adversarial Attacks (unconstrained) | ASR | 25.24 | 25
Safety Evaluation | JailbreakV-28K v1 (test) | ASR (Noise-T) | 12.42 | 18
Video Jailbreak Defense | Video-SafetyBench (harmful queries) | 1-VC ASR | 1 | 15
Video Jailbreak Defense | Video-SafetyBench (benign queries) | ASR (VC) | 2.56 | 15

(Showing 10 of 18 rows)
