ETA: Evaluating Then Aligning Safety of Vision Language Models at Inference Time
About
Vision Language Models (VLMs) have become essential backbones for multimodal intelligence, yet significant safety challenges limit their real-world application. While textual inputs are often effectively safeguarded, adversarial visual inputs can easily bypass VLM defense mechanisms. Existing defense methods are either resource-intensive, requiring substantial data and compute, or fail to simultaneously ensure safety and usefulness in responses. To address these limitations, we propose a novel two-phase inference-time alignment framework, Evaluating Then Aligning (ETA): 1) Evaluating input visual contents and output responses to establish robust safety awareness in multimodal settings, and 2) Aligning unsafe behaviors at both shallow and deep levels by conditioning the VLM's generative distribution with an interference prefix and performing sentence-level best-of-N search for the most harmless and helpful generation paths. Extensive experiments show that ETA outperforms baseline methods in harmlessness, helpfulness, and efficiency, reducing the unsafe rate by 87.5% under cross-modality attacks and achieving a 96.6% win-tie rate in GPT-4 helpfulness evaluation. The code is publicly available at https://github.com/DripNowhy/ETA.
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Science Question Answering | ScienceQA | -- | -- | 502 |
| Multimodal Reasoning | MM-Vet | MM-Vet Score | 52.1 | 431 |
| Visual Question Answering | GQA | Score | 63.2 | 193 |
| Multimodal Evaluation | MM-Vet | Score | 35.6 | 180 |
| Over-refusal | XSTest | Overrefusal Rate | 15.6 | 78 |
| Multimodal Evaluation | MME | MME-P Score | 1.63e+3 | 73 |
| Safety Evaluation | MM-Safety | ASR | 7.3 | 57 |
| Safety Alignment | Visual Adversarial Attacks | ASR | 23.4 | 40 |
| Safety Alignment | JOOD | ASR | 2.6 | 40 |
| Safety Evaluation | SPA-VL | ASR | 4.5 | 40 |