
Light Alignment Improves LLM Safety via Model Self-Reflection with a Single Neuron

About

The safety of large language models (LLMs) has increasingly emerged as a fundamental aspect of their development. Existing safety alignment for LLMs is predominantly achieved through post-training methods, which are computationally expensive and often fail to generalize well across different models. The few existing lightweight alignment approaches either rely heavily on pre-computed safety injections or depend excessively on the model's own capabilities, resulting in limited generalization and degraded efficiency and usability during generation. In this work, we propose a safety-aware decoding method that requires only low-cost training of an expert model and employs a single neuron as a gating mechanism. By balancing the model's intrinsic capabilities with external guidance, our approach preserves utility while enhancing output safety. It demonstrates clear advantages in training overhead and in generalization across model scales, offering a new perspective on lightweight alignment for the safe and practical deployment of large language models. Code: https://github.com/Beijing-AISI/NGSD.
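To make the decoding-time mechanism concrete, below is a minimal sketch of neuron-gated decoding under two assumptions not spelled out in the abstract: that the single-neuron gate is a sigmoid unit applied to the base model's last hidden state, and that the base and expert next-token distributions are blended as a convex combination. All names here (gated_safe_decode_step, gate_w, gate_b) are illustrative and are not taken from the paper or its released code.

import torch

def gated_safe_decode_step(base_logits, expert_logits, hidden_state, gate_w, gate_b):
    """One decoding step that blends base-model and safety-expert logits.

    A single neuron (dot product + bias + sigmoid) reads the base model's
    last hidden state and produces a scalar gate g in (0, 1): g near 1
    defers to the safety expert, g near 0 keeps the base distribution.
    This is a hypothetical sketch, not the paper's actual interface.
    """
    g = torch.sigmoid(hidden_state @ gate_w + gate_b)    # scalar gate in (0, 1)
    mixed = (1.0 - g) * base_logits + g * expert_logits  # convex combination of logits
    return torch.argmax(mixed, dim=-1)                   # greedy next-token choice

# Usage with random stand-in tensors (vocab size and hidden dim are arbitrary):
vocab_size, hidden_dim = 32000, 4096
base_logits = torch.randn(vocab_size)
expert_logits = torch.randn(vocab_size)
hidden_state = torch.randn(hidden_dim)
gate_w = torch.randn(hidden_dim) / hidden_dim**0.5
gate_b = torch.tensor(0.0)
next_token = gated_safe_decode_step(base_logits, expert_logits, hidden_state, gate_w, gate_b)

Because the gate is a single trainable neuron rather than a full reward model or classifier, the per-token overhead beyond the expert's forward pass is one dot product, which is consistent with the abstract's claim of low training and generation cost.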

Sicheng Shen, Mingyang Lv, Han Shen, Jialin Wu, Binghao Wang, Zhou Yang, Guobin Shen, Dongcheng Zhao, Feifei Zhao, Yi Zeng • 2026

Related benchmarks

Task                   | Dataset                       | Metric          | Result | Rank
Jailbreak Attack       | Prefilling Attack (20 tokens) | ASR (%)         | 1.21   | 45
Jailbreak Attack       | Prefilling Attack (40 tokens) | ASR (%)         | 1.52   | 45
Jailbreak Attack       | Prefilling Attack (10 tokens) | ASR (%)         | 8.18   | 45
Mathematical Reasoning | GSM8K                         | Accuracy (%)    | 93.3   | 29
Jailbreak Attack       | GCG                           | ASR             | 4      | 27
Jailbreak Attack       | AutoDAN                       | ASR             | 0.02   | 27
Jailbreak Attack       | PAIR                          | ASR             | 8      | 27
Adversarial Robustness | GCG                           | --              | --     | 21
Safety Evaluation      | FalseReject                   | USR Benign Rate | 65     | 18
Adversarial Robustness | AutoDAN                       | ASR             | 0.00   | 18

Showing 10 of 13 rows.
