
Toward Safer Diffusion Language Models: Discovery and Mitigation of Priming Vulnerability

About

Diffusion language models (DLMs) generate tokens in parallel through iterative denoising, which can reduce latency and enable bidirectional conditioning. However, the safety risks posed by jailbreak attacks that exploit this inference mechanism are not well understood. In this paper, we reveal that DLMs have a critical vulnerability stemming from their iterative denoising process and propose a countermeasure. Specifically, our investigation shows that if an affirmative token for a harmful query appears at an intermediate step, subsequent denoising can be steered toward a harmful response even in aligned models. As a result, simply injecting such affirmative tokens can readily bypass the safety guardrails. Furthermore, we demonstrate that the vulnerability allows existing optimization-based jailbreak attacks to succeed on DLMs. Building on this analysis, we propose a novel safety alignment method tailored to DLMs that trains models to generate safe responses from contaminated intermediate states that contain affirmative tokens. Our experiments indicate that the proposed method significantly mitigates the vulnerability with minimal impact on task performance. Furthermore, our method improves robustness against conventional jailbreak attacks. Our work underscores the need for DLM-specific safety research. Our code is available at https://github.com/mdl-lab/dlm-priming-vulnerability.
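The priming mechanism described above can be illustrated with a toy denoising loop. Everything here is a stand-in sketch, not the authors' implementation: `denoise_step` is a dummy that reveals one masked position per step (a real DLM predicts all masked positions in parallel with bidirectional conditioning), and the affirmative prefix `["Sure,", "here", "is"]` is an illustrative example of injected tokens.

```python
# Toy sketch of the priming attack: injected affirmative tokens persist in
# the intermediate state and condition all subsequent denoising steps.
# All names and behaviors here are illustrative assumptions.

MASK = "[MASK]"

def denoise_step(state):
    """Dummy denoiser: fills the first remaining mask with a filler token.
    A real DLM would fill masked positions with model predictions that are
    conditioned on every already-revealed token, including injected ones."""
    out = list(state)
    for i, tok in enumerate(out):
        if tok == MASK:
            out[i] = "token"  # placeholder for a model prediction
            break
    return out

def generate(length, steps, inject=None):
    """Run iterative denoising from an all-mask state. If `inject` is given,
    overwrite the leading positions of the intermediate state with those
    tokens before denoising begins (the priming contamination)."""
    state = [MASK] * length
    if inject:
        for i, tok in enumerate(inject):
            state[i] = tok
    for _ in range(steps):
        state = denoise_step(state)
    return state

clean = generate(length=6, steps=6)
primed = generate(length=6, steps=6, inject=["Sure,", "here", "is"])
print(primed)
```

Because later denoising steps never revisit already-revealed positions, the injected affirmative prefix survives to the final output and steers the rest of the generation, which is the vulnerability the paper's alignment method trains models to recover from.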

Shojiro Yamabe, Jun Sakuma • 2025

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Jailbreak Robustness | JBB-Behaviors (test) | ASR: 0.00 | 24 |
| Robustness against priming vulnerability | JBB-Behaviors (test) | ASR (Guardrail Model): 0.00 | 20 |
| Jailbreak Attack Robustness | JBB-Behaviors | ASR (PAIR): 10 | 18 |
| Jailbreak Robustness | JBB-Behaviors | ASR (PAIR, Guardrail Model): 0.3 | 18 |
| Jailbreak Robustness | AdvBench | PAIR ASR (GPT-4o): 4 | 18 |
| Priming Attack Robustness | AdvBench No Attack (test) | ASR (GPT-4o): 0.00 | 18 |
| Priming Attack Robustness | AdvBench Anchoring (test) | ASR (GPT-4o): 92 | 5 |
| Priming Attack Robustness | AdvBench PAD (test) | ASR (GPT-4o): 14 | 3 |
| Priming Attack Robustness | AdvBench DiJA (test) | ASR (GPT-4o): 18 | 2 |
| Priming Attack Robustness | AdvBench First-Step GCG (test) | ASR (GPT-4o): 4 | 2 |
