Steering to Say No: Configurable Refusal via Activation Steering in Vision Language Models
About
With the rapid advancement of Vision Language Models (VLMs), refusal mechanisms have become a critical component for ensuring responsible and safe model behavior. However, existing refusal strategies are largely *one-size-fits-all* and fail to adapt to diverse user needs and contextual constraints, leading to either under-refusal or over-refusal. In this work, we first explore the challenges above and develop **C**onfigurable **R**efusal in **VLM**s (**CR-VLM**), a robust and efficient approach for *configurable* refusal based on activation steering. CR-VLM consists of three integrated components: (1) extracting a configurable refusal vector via a teacher-forced mechanism to amplify the refusal signal; (2) introducing a gating mechanism that mitigates over-refusal by preserving acceptance for in-scope queries; and (3) designing a counterfactual vision enhancement module that aligns visual representations with refusal requirements. Comprehensive experiments across multiple datasets and various VLMs demonstrate that CR-VLM achieves effective, efficient, and robust configurable refusal, offering a scalable path toward user-adaptive safety alignment in VLMs.
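To make the first two components concrete, here is a minimal sketch of gated activation steering. The shapes, the mean-difference refusal vector, the cosine-similarity gate, and the names `refusal_direction` / `steer` are illustrative assumptions, not the paper's actual implementation; CR-VLM's teacher-forced extraction and gating are more involved than this toy version.

```python
import numpy as np

def refusal_direction(h_refuse: np.ndarray, h_accept: np.ndarray) -> np.ndarray:
    """Hypothetical refusal vector: difference of mean hidden states.

    h_refuse, h_accept: activations of shape [n_prompts, hidden_dim],
    collected from refusal-eliciting and ordinary (in-scope) prompts.
    """
    return h_refuse.mean(axis=0) - h_accept.mean(axis=0)

def steer(h: np.ndarray, v: np.ndarray, alpha: float = 4.0,
          gate_threshold: float = 0.0) -> np.ndarray:
    """Add the scaled refusal vector only when a gate fires.

    The gate checks the cosine similarity between the query activation h
    and the refusal direction v; in-scope queries (low similarity) keep
    their original activations, which is one simple way to mitigate
    over-refusal.
    """
    v_unit = v / np.linalg.norm(v)
    cos_sim = float(h @ v_unit) / float(np.linalg.norm(h))
    if cos_sim > gate_threshold:
        return h + alpha * v_unit  # steer toward refusal
    return h  # gate closed: leave the activation untouched
```

In a real VLM this would be applied to a chosen layer's residual-stream activations via a forward hook; the toy version above only shows the vector arithmetic.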
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Out-of-scope refusal | ScienceQA out-of-scope (test) | Refusal Rate | 83.5 | 40 |
| Over-refusal evaluation | ScienceQA in-scope (test) | Biology Refusal Count | 0.00 | 32 |
| Over-refusal evaluation | MMMU in-scope (test) | Math Score | 12.5 | 32 |
| Out-of-scope refusal | MMMU out-of-scope (test) | Refusal Rate | 0.81 | 9 |