ETA: Evaluating Then Aligning Safety of Vision Language Models at Inference Time

About

Vision Language Models (VLMs) have become essential backbones for multimodal intelligence, yet significant safety challenges limit their real-world application. While textual inputs are often effectively safeguarded, adversarial visual inputs can easily bypass VLM defense mechanisms. Existing defense methods are either resource-intensive, requiring substantial data and compute, or fail to simultaneously ensure safety and usefulness in responses. To address these limitations, we propose a novel two-phase inference-time alignment framework, Evaluating Then Aligning (ETA): 1) Evaluating input visual content and output responses to establish robust safety awareness in multimodal settings, and 2) Aligning unsafe behaviors at both shallow and deep levels by conditioning the VLMs' generative distribution with an interference prefix and performing a sentence-level best-of-N search for the most harmless and helpful generation paths. Extensive experiments show that ETA outperforms baseline methods in terms of harmlessness, helpfulness, and efficiency, reducing the unsafe rate by 87.5% in cross-modality attacks and achieving 96.6% win-ties in GPT-4 helpfulness evaluation. The code is publicly available at https://github.com/DripNowhy/ETA.

Yi Ding, Bolian Li, Ruqi Zhang • 2024
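
How the two phases fit together can be sketched in a few lines of Python. This is a minimal illustration under assumed details, not the authors' implementation (see the linked repository for that): the helpers visual_safety_score, textual_safety_score, generate_sentence, and harmless_helpful_reward are hypothetical stand-ins for ETA's image evaluator, output-side safety evaluator, reward scoring, and VLM decoder, and the prefix string, thresholds, and candidate counts are placeholder values.

```python
import random

# Assumed placeholder values; ETA's actual prefix, thresholds, and N differ.
INTERFERENCE_PREFIX = "As an AI assistant, "
VISUAL_THRESHOLD = 0.5    # flag the image as unsafe above this score
TEXTUAL_THRESHOLD = 0.5   # flag the response as unsafe above this score
N_CANDIDATES = 5          # best-of-N width per sentence
MAX_SENTENCES = 8         # fixed response length, to keep the sketch simple

def visual_safety_score(image) -> float:
    """Stand-in for the input-side visual safety evaluator."""
    return random.random()

def textual_safety_score(text) -> float:
    """Stand-in for the output-side safety evaluator."""
    return random.random()

def generate_sentence(context, image) -> str:
    """Stand-in for sampling one sentence from the VLM given the running context."""
    return "<sampled sentence> "

def harmless_helpful_reward(sentence) -> float:
    """Stand-in for the combined harmlessness/helpfulness score used to rank candidates."""
    return random.random()

def eta_generate(prompt, image) -> str:
    # Phase 1 (Evaluating): score the visual input and a draft response.
    draft = generate_sentence(prompt, image)
    if (visual_safety_score(image) < VISUAL_THRESHOLD
            and textual_safety_score(draft) < TEXTUAL_THRESHOLD):
        return draft  # both modalities look safe; keep the ordinary decoding path

    # Phase 2 (Aligning), shallow level: condition the generative
    # distribution by forcing the response to start with a safety prefix.
    response = INTERFERENCE_PREFIX

    # Phase 2, deep level: sentence-level best-of-N search, keeping the
    # candidate continuation that scores highest on harmlessness and helpfulness.
    for _ in range(MAX_SENTENCES):
        candidates = [generate_sentence(prompt + response, image)
                      for _ in range(N_CANDIDATES)]
        response += max(candidates, key=harmless_helpful_reward)
    return response
```

The division of labor is the point of the sketch: the interference prefix is a cheap bias toward safe framing, while the per-sentence best-of-N search does the heavier work of steering each continuation toward a response that stays both harmless and useful.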

Related benchmarks

Task                          Dataset                       Metric              Result    Rank
Science Question Answering    ScienceQA                     --                  --        502
Multimodal Reasoning          MM-Vet                        MM-Vet Score        52.1      431
Visual Question Answering     GQA                           Score               63.2      193
Multimodal Evaluation         MM-Vet                        Score               35.6      180
Over-refusal                  XSTest                        Overrefusal Rate    15.6      78
Multimodal Evaluation         MME                           MME-P Score         1630      73
Safety Evaluation             MM-Safety                     ASR                 7.3       57
Safety Alignment              Visual Adversarial Attacks    ASR                 23.4      40
Safety Alignment              JOOD                          ASR                 2.6       40
Safety Evaluation             SPA-VL                        ASR                 4.5       40

ASR = attack success rate (lower is better). Showing 10 of 15 rows.
