
Steering Away from Harm: An Adaptive Approach to Defending Vision Language Model Against Jailbreaks

About

Vision Language Models (VLMs) can produce unintended and harmful content when exposed to adversarial attacks, particularly because their vision capabilities create new attack surfaces. Existing defenses, such as input preprocessing, adversarial training, and response-evaluation-based methods, are often impractical for real-world deployment due to their high cost. To address this challenge, we propose ASTRA, an efficient and effective defense that adaptively steers models away from adversarial feature directions to resist VLM attacks. Our key procedures are finding transferable steering vectors that represent the direction of harmful responses, and applying adaptive activation steering to remove these directions at inference time. To construct effective steering vectors, we randomly ablate visual tokens from adversarial images and identify those most strongly associated with jailbreaks; these tokens are then used to build the steering vectors. During inference, our adaptive steering method projects the calibrated activations onto the steering vectors, causing little performance drop on benign inputs while strongly suppressing harmful outputs under adversarial inputs. Extensive experiments across multiple models and baselines demonstrate state-of-the-art performance and high efficiency in mitigating jailbreak risks. Additionally, ASTRA exhibits good transferability, defending against unseen attacks (i.e., structure-based attacks, perturbation-based attacks with projected gradient descent variants, and text-only attacks). Our code is available at https://github.com/ASTRAL-Group/ASTRA.
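The projection-based steering described in the abstract, removing a harmful-direction component from an activation only when it is strongly present, could be sketched roughly as below. This is a minimal illustration under stated assumptions: the function name `adaptive_steer`, the `threshold` parameter, and the use of a plain dot-product projection are illustrative choices, not the paper's actual implementation.

```python
import numpy as np

def adaptive_steer(activation, steering_vector, threshold=0.0):
    """Adaptively remove a harmful direction from a model activation.

    Projects the activation onto the (unit-normalized) steering vector and
    subtracts that component only when the projection exceeds the threshold,
    so benign inputs with little alignment to the harmful direction are
    left essentially untouched.
    """
    v = steering_vector / np.linalg.norm(steering_vector)
    proj = float(activation @ v)          # scalar alignment with harmful direction
    if proj > threshold:
        return activation - proj * v      # ablate the harmful component
    return activation                     # benign case: no change
```

For example, with a steering vector along the first axis, an activation of `[3, 4]` would be steered to `[0, 4]`, while `[-2, 5]` (negative projection) would pass through unchanged.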

Han Wang, Gang Wang, Huan Zhang• 2024

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Science Question Answering | ScienceQA | - | 502 |
| Multimodal Reasoning | MM-Vet | MM-Vet Score: 47.8 | 431 |
| Visual Question Answering | GQA | Score: 62.3 | 193 |
| Multimodal Evaluation | MM-Vet | Score: 34.8 | 180 |
| Over-refusal | XSTest | Overrefusal Rate: 5.4 | 78 |
| Multimodal Evaluation | MME | MME-P Score: 1620 | 73 |
| Safety Evaluation | MM-Safety | ASR: 13.4 | 57 |
| Safety Alignment | Visual Adversarial Attacks | ASR: 18.6 | 40 |
| Safety Alignment | JOOD | ASR: 8.8 | 40 |
| Safety Evaluation | SPA-VL | ASR: 8.3 | 40 |
(Showing 10 of 54 rows.)
