
CoCA: Regaining Safety-awareness of Multimodal Large Language Models with Constitutional Calibration

About

The deployment of multimodal large language models (MLLMs) has demonstrated remarkable success in conversations involving visual inputs, thanks to the superior power of large language models (LLMs). These MLLMs are typically built on top of an LLM, with an image encoder that maps images into the token embedding space of the LLM. However, integrating the visual modality introduces a unique vulnerability: the MLLM becomes susceptible to malicious visual inputs and prone to generating sensitive or harmful responses, even though the underlying LLM has been trained on textual data to align with human values. In this paper, we first raise the question: "Do MLLMs possess safety awareness against malicious image inputs?" We find that adding a principle specifying the safety requirement to the MLLM's input boosts the model's safety awareness. This phenomenon verifies that the MLLM's safety awareness against image inputs still exists; it is merely weakened by the modality gap. We then introduce a simple yet effective technique, termed CoCA (Constitutional Calibration), which amplifies the safety awareness of the MLLM by calibrating its output distribution. Our proposed strategy helps the model reclaim its original safety awareness without losing its original capabilities. We verify the effectiveness of our approach on both multimodal safety and understanding benchmarks.
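The calibration idea described above can be sketched as a logit-space extrapolation: compare next-token logits produced with and without the safety principle in the prompt, then amplify that difference. This is a minimal illustrative sketch; the function name, the `alpha` parameter, and the exact combination rule are assumptions for exposition, not the paper's precise formulation.

```python
import numpy as np

def constitutional_calibration(logits_with_principle, logits_plain, alpha=1.0):
    """Amplify the shift a safety principle induces on next-token logits.

    The gap between logits conditioned on the safety principle and the
    plain logits is scaled by `alpha` and added on top of the principled
    logits; alpha=0 recovers ordinary principle-conditioned decoding,
    larger alpha strengthens safety awareness. (Illustrative assumption,
    not the paper's exact rule.)
    """
    w = np.asarray(logits_with_principle, dtype=float)
    p = np.asarray(logits_plain, dtype=float)
    calibrated = w + alpha * (w - p)
    # Softmax to obtain the calibrated next-token distribution.
    z = calibrated - calibrated.max()
    probs = np.exp(z) / np.exp(z).sum()
    return probs
```

For example, if the principle raises the logit of a refusal token relative to the plain prompt, increasing `alpha` pushes the calibrated distribution further toward that refusal token at each decoding step.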

Jiahui Gao, Renjie Pi, Tianyang Han, Han Wu, Lanqing Hong, Lingpeng Kong, Xin Jiang, Zhenguo Li • 2024

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Jailbreak Safety Evaluation | MM-SafetyBench (test) | Average ASR: 18.92 | 56 |
| Safety Evaluation | MM-SafetyBench | Average ASR: 6.73 | 42 |
| Text-Based Jailbreak Attack | JailbreakV-28K (test) | ASR (None-Template): 61.23 | 25 |
| Visual Jailbreak Defense | Visual Adversarial Attacks (epsilon = 32/255) | ASR: 41.88 | 25 |
| Visual Jailbreak Defense | Visual Adversarial Attacks (epsilon = 64/255) | ASR: 42.91 | 25 |
| Visual Jailbreak Defense | Visual Adversarial Attacks (Unconstrained) | ASR: 44.82 | 25 |
| Visual Jailbreak Defense | Visual Adversarial Attacks (epsilon = 16/255) | ASR: 38.15 | 25 |
| Safety Evaluation | JailbreakV-28K v1 (test) | ASR (Noise-T): 22.16 | 18 |
| Multimodal Jailbreak Defense | MM-SafetyBench (full) | ASR (Illegal Activity - S): 8.35 | 12 |
| Jailbreak Attack Defense | JailbreakV-28K v1 (test) | Defense Success Rate (Noise-T): 32.15 | 6 |

Showing 10 of 11 rows.
