
Self-Guard: Defending Large Reasoning Models via Enhanced Self-Reflection

About

The emergence of Large Reasoning Models (LRMs) introduces a new paradigm of explicit reasoning, enabling remarkable advances yet posing unique risks such as reasoning manipulation and information leakage. To mitigate these risks, current alignment strategies predominantly rely on heavy post-training paradigms or external interventions. However, these approaches are often computationally intensive and fail to address the inherent awareness-compliance gap, a critical misalignment where models recognize potential risks yet prioritize following user instructions due to their sycophantic tendencies. To address these limitations, we propose Self-Guard, a lightweight safety defense framework that reinforces safety compliance at the representational level. Self-Guard operates through two principal stages: (1) safety-oriented prompting, which activates the model's latent safety awareness to evoke spontaneous reflection, and (2) safety activation steering, which extracts the resulting directional shift in the hidden state space and amplifies it to ensure that safety compliance prevails over sycophancy during inference. Experiments demonstrate that Self-Guard effectively bridges the awareness-compliance gap, achieving robust safety performance without compromising model utility. Furthermore, Self-Guard exhibits strong generalization across diverse unseen risks and varying model scales, offering a cost-efficient solution for LRM safety alignment.
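The second stage described above, safety activation steering, amounts to extracting a direction in hidden-state space (the shift induced by safety-oriented prompting) and adding an amplified copy of it back during inference. The paper's exact extraction procedure is not given here, so the following is only a minimal NumPy sketch of that general idea: the steering vector is taken as the mean difference between safety-prompted and plain hidden states, and `alpha` is a hypothetical amplification coefficient.

```python
import numpy as np

def steering_vector(h_safe, h_plain):
    """Estimate the safety direction as the mean hidden-state shift
    between safety-prompted runs and plain runs (both [n, d])."""
    return (h_safe - h_plain).mean(axis=0)

def apply_steering(h, v, alpha=2.0):
    """Shift hidden states along the safety direction, amplified by
    alpha so safety compliance can dominate sycophantic tendencies."""
    return h + alpha * v

# Toy hidden states standing in for a model's activations at one layer.
rng = np.random.default_rng(0)
h_plain = rng.normal(size=(8, 16))   # activations under the plain prompt
h_safe = h_plain + 0.5               # toy shift mimicking safety prompting
v = steering_vector(h_safe, h_plain)
h_steered = apply_steering(h_plain, v, alpha=2.0)
```

In a real LRM this shift would be injected at chosen transformer layers during generation (e.g. via forward hooks), leaving the model weights untouched, which is what makes the approach lightweight relative to post-training.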

Jingnan Zheng, Jingjun Xu, Yanzhen Luo, Chenhang Cui, Gelei Deng, Zhenkai Liang, Xiang Wang, An Zhang, Tat-Seng Chua • 2026

Related benchmarks

Task | Dataset | Result | Rank
Mathematical Reasoning | MATH | Accuracy 92.8 | 535
Mathematical Reasoning | AIME | AIME Accuracy 73.3 | 283
Science Reasoning | GPQA | Accuracy 58.6 | 218
Harmful Request Defense | AdvBench | ASR 0.00 | 44
Jailbreak Defense | Wild Jailbreak | ASR 4.8 | 36
Red-teaming Safety Evaluation | HarmBench | ASR 3 | 32
Harmful Request Defense | SORRY-Bench | ASR 13 | 24
Jailbreak Attack Defense | PAIR | ASR 1 | 24
Jailbreak Attack Defense | FORTRESS | ASR 11.2 | 24
General Reasoning | MMLU-P | Accuracy 74.2 | 24
(showing 10 of 12 rows)
