SafeThinker: Reasoning about Risk to Deepen Safety Beyond Shallow Alignment
About
Despite the intrinsic risk-awareness of Large Language Models (LLMs), current defenses often result in shallow safety alignment, rendering models vulnerable to disguised attacks (e.g., prefilling) while degrading utility. To bridge this gap, we propose SafeThinker, an adaptive framework that dynamically allocates defensive resources via a lightweight gateway classifier. Based on the gateway's risk assessment, inputs are routed through three distinct mechanisms: (i) a Standardized Refusal Mechanism for explicit threats to maximize efficiency; (ii) a Safety-Aware Twin Expert (SATE) module to intercept deceptive attacks masquerading as benign queries; and (iii) a Distribution-Guided Think (DDGT) component that adaptively intervenes during uncertain generation. Experiments show that SafeThinker significantly lowers attack success rates across diverse jailbreak strategies without compromising utility, demonstrating that coordinating intrinsic judgment throughout the generation process effectively balances robustness and practicality.
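The routing described above can be sketched as a simple dispatch on the gateway's risk assessment. This is a minimal illustration only: the class names, the keyword heuristics standing in for the gateway classifier, and the mechanism labels are hypothetical, not taken from the SafeThinker implementation.

```python
from enum import Enum

class Risk(Enum):
    EXPLICIT = "explicit"      # clearly harmful request
    SUSPICIOUS = "suspicious"  # benign-looking but possibly deceptive
    UNCERTAIN = "uncertain"    # ambiguous; intervene during generation

def gateway_classify(prompt: str) -> Risk:
    """Stand-in for the lightweight gateway classifier (toy heuristics)."""
    text = prompt.lower()
    if "build a bomb" in text:
        return Risk.EXPLICIT
    if "roleplay" in text or "pretend" in text:
        return Risk.SUSPICIOUS
    return Risk.UNCERTAIN

def safethinker_route(prompt: str) -> str:
    """Dispatch a prompt to one of the three mechanisms by assessed risk."""
    risk = gateway_classify(prompt)
    if risk is Risk.EXPLICIT:
        return "standard_refusal"          # (i) Standardized Refusal Mechanism
    if risk is Risk.SUSPICIOUS:
        return "sate_inspection"           # (ii) Safety-Aware Twin Expert (SATE)
    return "distribution_guided_decoding"  # (iii) DDGT during generation
```

In the actual framework the gateway is a learned classifier rather than keyword matching, and the third branch intervenes adaptively at decode time instead of returning a static label; the sketch only shows the three-way routing structure.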
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Instruction Following | MT-Bench | -- | 215 |
| Mathematical Reasoning | GSM8K | EM 89.8 | 123 |
| Prohibited Content Detection | ALERT | ASR 0.00 | 34 |
| Jailbreak Attack | Jailbreak Attack Suite | GCG ASR 0.00 | 22 |
| Code Generation | SQL-Create Context | Execution Accuracy 94.4 | 14 |
| Safety Evaluation | Prefilling Attacks (Qwen2.5-14B-Instruct) | ASR 11.5 | 9 |
| Safety Evaluation | Jailbroken (test) | ASR 1 | 7 |
| Safety Evaluation | ALERT (test) | ASR 0.2 | 7 |
| Safety Evaluation | DeepInception (test) | ASR 0.8 | 7 |