
SafeThinker: Reasoning about Risk to Deepen Safety Beyond Shallow Alignment

About

Despite the intrinsic risk-awareness of Large Language Models (LLMs), current defenses often result in shallow safety alignment, rendering models vulnerable to disguised attacks (e.g., prefilling) while degrading utility. To bridge this gap, we propose SafeThinker, an adaptive framework that dynamically allocates defensive resources via a lightweight gateway classifier. Based on the gateway's risk assessment, inputs are routed through three distinct mechanisms: (i) a Standardized Refusal Mechanism for explicit threats to maximize efficiency; (ii) a Safety-Aware Twin Expert (SATE) module to intercept deceptive attacks masquerading as benign queries; and (iii) a Distribution-Guided Think (DDGT) component that adaptively intervenes during uncertain generation. Experiments show that SafeThinker significantly lowers attack success rates across diverse jailbreak strategies without compromising utility, demonstrating that coordinating intrinsic judgment throughout the generation process effectively balances robustness and practicality.
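The abstract describes a gateway classifier that routes each input to the cheapest sufficient defense. A minimal sketch of that routing logic is below; all names, scores, and thresholds are illustrative assumptions, not the paper's implementation.

```python
# Sketch of SafeThinker-style adaptive risk routing.
# The gateway scores and thresholds here are hypothetical placeholders;
# the paper's gateway is a learned lightweight classifier.
from dataclasses import dataclass
from enum import Enum, auto

class Route(Enum):
    REFUSE = auto()        # Standardized Refusal Mechanism: explicit threats
    TWIN_EXPERT = auto()   # SATE: deceptive attacks posing as benign queries
    GUIDED_THINK = auto()  # DDGT: adaptive intervention during uncertain generation
    DIRECT = auto()        # clearly benign: answer with no extra defense

@dataclass
class RiskAssessment:
    explicit_risk: float   # gateway score for overt harmful intent
    disguise_risk: float   # gateway score for masked/obfuscated intent
    uncertainty: float     # gateway confidence gap on this input

def route(a: RiskAssessment,
          refuse_thr: float = 0.9,
          disguise_thr: float = 0.5,
          uncertain_thr: float = 0.3) -> Route:
    """Allocate defensive resources: cheapest path first, escalating by risk."""
    if a.explicit_risk >= refuse_thr:
        return Route.REFUSE        # maximize efficiency on explicit threats
    if a.disguise_risk >= disguise_thr:
        return Route.TWIN_EXPERT   # intercept disguised (e.g. prefilling) attacks
    if a.uncertainty >= uncertain_thr:
        return Route.GUIDED_THINK  # intervene mid-generation when unsure
    return Route.DIRECT
```

The point of the tiered dispatch is the utility/robustness trade-off the abstract claims: benign traffic skips the heavyweight defenses entirely, so robustness gains do not come at the cost of everyday utility.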

Xianya Fang, Xianying Luo, Yadong Wang, Xiang Chen, Yu Tian, Zequn Sun, Rui Liu, Jun Fang, Naiqiang Tan, Yuanning Cui, Sheng-Jun Huang • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Instruction Following | MT-Bench | – | – | 215 |
| Mathematical Reasoning | GSM8K | EM | 89.8 | 123 |
| Prohibited Content Detection | ALERT | ASR | 0.00 | 34 |
| Jailbreak Attack | Jailbreak Attack Suite | GCG ASR | 0.00 | 22 |
| Code Generation | SQL-Create Context | Execution Accuracy | 94.4 | 14 |
| Safety Evaluation | Prefilling Attacks (Qwen2.5-14B-Instruct) | ASR | 11.5 | 9 |
| Safety Evaluation | Jailbroken (test) | ASR | 1 | 7 |
| Safety Evaluation | ALERT (test) | ASR | 0.2 | 7 |
| Safety Evaluation | DeepInception (test) | ASR | 0.8 | 7 |
