
Cross-Modal Safety Mechanism Transfer in Large Vision-Language Models

About

Vision-language alignment in Large Vision-Language Models (LVLMs) successfully enables LLMs to understand visual input. However, we find that existing vision-language alignment methods fail to transfer the safety mechanism that LLMs already possess for text to the visual modality, which leaves LVLMs vulnerable to toxic images. To explore the cause of this problem, we give an explanation of where and how the safety mechanism of LVLMs operates and conduct a comparative analysis between text and vision. We find that the hidden states at specific transformer layers play a crucial role in successfully activating the safety mechanism, while the vision-language alignment at the hidden-state level in current methods is insufficient. This causes a semantic shift for input images relative to text in the hidden states, which in turn misleads the safety mechanism. To address this, we propose a novel Text-Guided vision-language Alignment method (TGA) for LVLMs. TGA retrieves texts related to the input image and uses them to guide the projection of vision into the hidden-state space of the LLM. Experiments show that TGA not only successfully transfers the safety mechanism for text in basic LLMs to vision during vision-language alignment for LVLMs, without any safety fine-tuning on the visual modality, but also maintains general performance on various vision tasks (Safe and Good).
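The two ingredients of TGA described above (retrieving texts related to the input image, then pulling the projected vision representation toward the hidden states of those texts) can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: `retrieve_texts` and `alignment_loss` are hypothetical helpers, and plain numpy vectors stand in for real image embeddings and LLM hidden states.

```python
import numpy as np

def retrieve_texts(image_emb, text_embs, k=2):
    """Return indices of the k texts most similar to the image embedding,
    using cosine similarity (a stand-in for the paper's retrieval step)."""
    sims = text_embs @ image_emb / (
        np.linalg.norm(text_embs, axis=1) * np.linalg.norm(image_emb) + 1e-8
    )
    return np.argsort(-sims)[:k]

def alignment_loss(vision_hidden, text_hidden):
    """Mean squared distance pulling projected vision hidden states toward
    the hidden states of the retrieved texts (one plausible alignment objective)."""
    return float(np.mean((vision_hidden - text_hidden) ** 2))

# Toy example: three candidate text embeddings, one image embedding.
text_embs = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
image_emb = np.array([1.0, 0.1])
top_k = retrieve_texts(image_emb, text_embs, k=2)  # indices of nearest texts
```

During training, the alignment loss would be minimized jointly with the usual vision-language objectives, so that toxic images land near their textual descriptions in hidden-state space, where the LLM's text safety mechanism can fire.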

Shicheng Xu, Liang Pang, Yunchang Zhu, Huawei Shen, Xueqi Cheng · 2024

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Object Hallucination Evaluation | POPE | -- | 935 |
| Multimodal Capability Evaluation | MM-Vet | Score: 34.6 | 282 |
| Science Question Answering | ScienceQA (test) | -- | 208 |
| Multimodal Evaluation | SEED-Bench | Accuracy: 58.7 | 80 |
| Safety Evaluation | SIUO | Safe Rate: 30.77 | 15 |
| Jailbreak Attack Robustness | Role-Play jailbreak attack | DSR: 2.11e+3 | 7 |
| Jailbreak Attack Robustness | ICA | DSR: 1.54e+3 | 7 |
| Jailbreak Attack Robustness | FigStep jailbreak attack | DSR: 17.44 | 7 |
| Safety Evaluation | Toxic Image Dataset Section 3 1.0 (test) | Porn: 20.65 | 7 |
| Defense against toxic images | Toxic Image Scenes | Porn Rate: 20.65 | 3 |

Other info

Code
