Visual Self-Fulfilling Alignment: Shaping Safety-Oriented Personas via Threat-Related Images
About
Multimodal large language models (MLLMs) face safety misalignment, where visual inputs can elicit harmful outputs. Existing defenses require explicit safety labels or contrastive data. However, while threat-related concepts are concrete and visually depictable, safety concepts such as helpfulness are abstract and lack visual referents. Inspired by the self-fulfilling mechanism underlying emergent misalignment, we propose Visual Self-Fulfilling Alignment (VSFA). VSFA fine-tunes vision-language models (VLMs) on neutral VQA tasks constructed around threat-related images, without any safety labels. Through repeated exposure to threat-related visual content, models internalize the implicit semantics of vigilance and caution, shaping safety-oriented personas. Experiments across multiple VLMs and safety benchmarks show that VSFA reduces attack success rates, improves response quality, and mitigates over-refusal while preserving general capabilities. Our work extends the self-fulfilling mechanism from text to the visual modality, offering a label-free approach to VLM alignment.
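The key data property is that each training record is ordinary VQA, with no safety label anywhere; only the image content is threat-related. A minimal sketch of what such a record might look like, assuming hypothetical question templates and file paths (the paper's actual construction pipeline is not specified here):

```python
import random

# Hypothetical neutral question templates -- illustrative only,
# not the templates used by the VSFA authors.
NEUTRAL_TEMPLATES = [
    "What objects are visible in this image?",
    "Describe the scene in this image.",
    "What is the main subject of this image?",
]

def build_vsfa_example(image_path: str, caption: str, seed: int = 0) -> dict:
    """Pair a threat-related image with a *neutral* VQA question.

    The record carries no safety label and no refusal text: it looks
    like ordinary VQA data, and only the image itself conveys the
    threat-related semantics the model is repeatedly exposed to.
    """
    rng = random.Random(seed)
    return {
        "image": image_path,
        "question": rng.choice(NEUTRAL_TEMPLATES),
        "answer": caption,  # a neutral description, not a safety response
    }

example = build_vsfa_example(
    "images/knife_on_table.jpg",  # hypothetical path to a threat-related image
    "A kitchen knife resting on a wooden table.",
)
```

Fine-tuning then proceeds on these records exactly as on any standard VQA dataset; the alignment effect comes from the image distribution rather than from supervision.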
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Jailbreak Attack Defense | MM-SafetyBench | Attack Success Rate (ASR) | 14.29 | 56 |
| Jailbreak Attack | FigStep | Attack Success Rate (ASR) | 14.2 | 26 |
| Jailbreak Attack Defense | SPA-VL | Attack Success Rate (ASR) | 22.64 | 16 |
| Jailbreak Attack Defense | FigStep, MM-SafetyBench, SPA-VL (average) | Attack Success Rate (ASR) | 14.18 | 16 |
| Multimodal Understanding | MM-Vet (benign queries) | Recognition Score | 53.2 | 12 |
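For reference, ASR is conventionally the percentage of jailbreak attempts whose responses are judged harmful, so lower is better. A minimal sketch (the judging procedure itself, e.g. which classifier or rubric labels a response harmful, varies per benchmark and is assumed here):

```python
def attack_success_rate(judgments: list[bool]) -> float:
    """ASR in percent: fraction of attack attempts judged harmful.

    `judgments[i]` is True if the model's response to attack i was
    judged harmful by the benchmark's evaluator; lower ASR is better.
    """
    if not judgments:
        return 0.0
    return 100.0 * sum(judgments) / len(judgments)

# Illustrative numbers only: 2 harmful responses out of 14 attempts.
asr = attack_success_rate([True, True] + [False] * 12)
```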