
VLM-Guard: Safeguarding Vision-Language Models via Fulfilling Safety Alignment Gap

About

The emergence of vision-language models (VLMs) comes with increased safety concerns, as the incorporation of multiple modalities heightens vulnerability to attacks. Although VLMs can be built upon LLMs that have textual safety alignment, this alignment is easily undermined when the vision modality is integrated. We attribute this safety challenge to the modality gap, a separation of image and text in the shared representation space, which blurs the distinction between harmful and harmless queries that is evident in LLMs but weakened in VLMs. To mitigate safety decay and close the safety alignment gap, we propose VLM-Guard, an inference-time intervention strategy that leverages the LLM component of a VLM as supervision for the safety alignment of the VLM. VLM-Guard projects the representations of the VLM into the subspace orthogonal to a safety steering direction extracted from the safety-aligned LLM. Experimental results on three malicious instruction settings show the effectiveness of VLM-Guard in safeguarding the VLM and closing the safety alignment gap between the VLM and its LLM component.
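The abstract describes two operations: extracting a safety steering direction from the LLM component and projecting VLM hidden states into the subspace orthogonal to that direction. A minimal NumPy sketch of the general technique follows; the paper's actual extraction procedure and intervention layers are not given here, so the difference-of-means construction, the function names, and the array shapes are assumptions for illustration only.

```python
import numpy as np

def safety_steering_direction(harmful_acts: np.ndarray,
                              harmless_acts: np.ndarray) -> np.ndarray:
    """Estimate a unit-norm safety steering direction.

    Assumption: the direction is taken as the difference between mean
    hidden-state activations of the safety-aligned LLM on harmful vs.
    harmless text prompts (a common choice in activation-steering work;
    the paper may use a different extraction method).

    harmful_acts, harmless_acts: arrays of shape (num_prompts, hidden_dim).
    """
    d = harmful_acts.mean(axis=0) - harmless_acts.mean(axis=0)
    return d / np.linalg.norm(d)

def project_orthogonal(h: np.ndarray, d: np.ndarray) -> np.ndarray:
    """Project a VLM hidden state h into the subspace orthogonal to
    the unit steering direction d:  h' = h - (h . d) d.

    After projection, h' carries no component along d.
    """
    return h - np.dot(h, d) * d
```

At inference time, such a projection would be applied to intermediate VLM representations so that the component along the unsafe steering direction is removed before generation proceeds.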

Qin Liu, Fei Wang, Chaowei Xiao, Muhao Chen • 2025

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Mathematical Reasoning | MathVista | Score: 76.1 | 385 |
| Multimodal Understanding | MMStar | -- | 324 |
| Optical Character Recognition | OCRBench | Score: 83.3 | 232 |
| Safety | MMSafetyBench | Safety Score: 88.4 | 25 |
| Safety | MSSBench | Safety Score: 85 | 25 |
| Safety | SIUO | Safety Score (SIUO): 78.1 | 25 |
