Robust Multimodal Safety via Conditional Decoding

About

Multimodal large language models (MLLMs) often suffer degraded safety alignment when harmful queries exploit cross-modal interactions: models aligned on text alone show a higher rate of successful attacks once extended to two or more modalities. In this work, we propose a simple conditional decoding strategy, CASA (Classification Augmented with Safety Attention), that uses the internal representations of an MLLM to predict a binary safety token before response generation. We introduce a novel safety attention module designed to enhance the model's ability to detect malicious queries. Our design ensures robust safety alignment without relying on any external classifier or auxiliary head, and without modality-specific safety fine-tuning. On diverse benchmarks including MM-SafetyBench, JailbreakV-28k, and adversarial audio tests, CASA lowers the average attack success rate by more than 97% across modalities and attack types. Our empirical evaluations also show that CASA maintains strong utility on benign inputs, a result validated through both automated and human evaluations (via 13 trained annotators). Together, these results highlight CASA as a simple, generalizable framework for improving multimodal LLM safety.

Anurag Kumar, Raghuveer Peri, Jon Burnsky, Alexandru Nelus, Rohit Paturi, Srikanth Vishnubhotla, Yanjun Qi • 2026
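
To make the conditional decoding idea concrete, below is a minimal sketch of how a safety-token check might be wired around a Hugging Face-style multimodal LM. The reserved tokens <safe> and <unsafe>, the function name, and the API shape are illustrative assumptions based on the abstract, not the authors' released code.

```python
import torch

def conditional_decode(model, tokenizer, inputs, refusal="I can't help with that."):
    # Reserved safety tokens -- hypothetical names; the paper's actual token
    # vocabulary is not public, so these are assumptions for illustration.
    safe_id = tokenizer.convert_tokens_to_ids("<safe>")
    unsafe_id = tokenizer.convert_tokens_to_ids("<unsafe>")

    # One forward pass over the (multimodal) prompt; the next-token logits
    # carry the model's internal judgment of the query.
    with torch.no_grad():
        logits = model(**inputs).logits[:, -1, :]

    # Constrained decoding: compare only the two safety tokens, so the first
    # decoding step reduces to a binary safe/unsafe classification made by
    # the model itself, with no external classifier or auxiliary head.
    if logits[0, unsafe_id] > logits[0, safe_id]:
        return refusal

    # Query judged safe: generate the response as usual.
    output_ids = model.generate(**inputs, max_new_tokens=256)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```

The key design point this sketch mirrors is that the safety decision is made at the first decoding step from the model's own logits, so benign queries pay only one extra token of overhead.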

Related benchmarks

Task                           Dataset          Result                           Rank
Multimodal Evaluation          MME              -                                658
Jailbreak Attack Defense       MM-SafetyBench   Attack Success Rate (ASR): 0.2   56
Multimodal Jailbreak Defense   JBV-28k          ASR: 0.00                        16
Multimodal Jailbreak Defense   AIAH Spell       ASR (%): 0.00                    16
Multimodal Jailbreak Defense   JB-Prompts Avg   ASR: 0.00                        16
