
Visual Self-Fulfilling Alignment: Shaping Safety-Oriented Personas via Threat-Related Images

About

Multimodal large language models (MLLMs) face safety misalignment, where visual inputs enable harmful outputs. Existing methods address this by requiring explicit safety labels or contrastive data; yet threat-related concepts are concrete and visually depictable, whereas safety concepts, such as helpfulness, are abstract and lack visual referents. Inspired by the self-fulfilling mechanism underlying emergent misalignment, we propose Visual Self-Fulfilling Alignment (VSFA). VSFA fine-tunes vision-language models (VLMs) on neutral VQA tasks constructed around threat-related images, without any safety labels. Through repeated exposure to threat-related visual content, models internalize the implicit semantics of vigilance and caution, shaping safety-oriented personas. Experiments across multiple VLMs and safety benchmarks demonstrate that VSFA reduces the attack success rate, improves response quality, and mitigates over-refusal while preserving general capabilities. Our work extends the self-fulfilling mechanism from text to visual modalities, offering a label-free approach to VLM alignment.
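The data-construction step described above (pairing threat-related images with neutral VQA prompts, with no safety labels) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the question templates, file names, and the `build_neutral_vqa` helper are our own assumptions, and in VSFA the answers would presumably come from ordinary VQA annotation or a base model, not from safety supervision.

```python
# Hypothetical sketch of VSFA-style data construction.
# Assumption: the VQA questions are content-neutral (describe, identify, count);
# the only "safety signal" is the implicit semantics of the threat-related images.

NEUTRAL_QUESTIONS = [
    "What objects are visible in this image?",
    "Describe the scene shown in this image.",
    "What is the main subject of this image?",
]

def build_neutral_vqa(image_paths):
    """Pair each threat-related image with neutral VQA prompts (no safety labels)."""
    examples = []
    for path in image_paths:
        for question in NEUTRAL_QUESTIONS:
            examples.append({"image": path, "question": question})
    return examples

# Illustrative file names only; the actual image sources are not specified here.
dataset = build_neutral_vqa(["weapon_001.jpg", "hazard_002.jpg"])
print(len(dataset))  # 2 images x 3 questions = 6 examples
```

The resulting (image, question) pairs would then feed a standard supervised fine-tuning loop for the VLM; no contrastive or refusal-labeled data is involved.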

Qishun Yang, Shu Yang, Lijie Hu, Di Wang • 2026

Related benchmarks

Task | Dataset | Metric | Result | Rank
Jailbreak Attack Defense | MM-SafetyBench | Attack Success Rate (ASR) | 14.29 | 56
Jailbreak Attack | FigStep | Attack Success Rate (ASR) | 14.2 | 26
Jailbreak Attack Defense | SPA-VL | Attack Success Rate (ASR) | 22.64 | 16
Jailbreak Attack Defense | FigStep, MM-SafetyBench, SPA-VL (average) | Attack Success Rate (ASR) | 14.18 | 16
Multimodal Understanding | MM-Vet (benign queries) | Recognition Score | 53.2 | 12
