
Safety Alignment via Constrained Knowledge Unlearning

About

Despite significant progress in safety alignment, large language models (LLMs) remain susceptible to jailbreak attacks. Existing defense mechanisms do not fully remove harmful knowledge from LLMs, which allows such attacks to bypass safeguards and produce harmful outputs. To address this challenge, we propose a novel safety alignment strategy, Constrained Knowledge Unlearning (CKU), which pursues two primary objectives: knowledge localization and retention, and unlearning of harmful knowledge. CKU works by scoring neurons in specific multilayer perceptron (MLP) layers to identify a subset U of neurons associated with useful knowledge. During the unlearning process, CKU prunes the gradients of neurons in U to preserve valuable knowledge while effectively mitigating harmful content. Experimental results demonstrate that CKU significantly enhances model safety without compromising overall performance, offering a superior balance between safety and utility compared to existing methods. Additionally, our analysis of neuron knowledge sensitivity across various MLP layers provides valuable insights into the mechanics of safety alignment and model knowledge editing.
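The core mechanism described above can be sketched in a few lines: score neurons, select a protected subset U, and zero out ("prune") the gradients of U during each unlearning update. The scoring rule below (mean absolute activation on utility data) and the function names are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def select_protected_neurons(activations, k):
    """Identify the subset U of neurons to protect.

    Hypothetical scoring rule: rank neurons by mean absolute
    activation over utility data; CKU's actual scoring may differ.
    activations: array of shape (num_samples, num_neurons).
    """
    scores = np.abs(activations).mean(axis=0)
    return set(np.argsort(scores)[-k:].tolist())

def unlearning_step(W, grad, protected, lr=0.1):
    """One constrained unlearning update on an MLP weight matrix.

    Gradients for neuron rows in the protected set U are zeroed,
    so their (useful) knowledge is retained while the remaining
    neurons are updated to unlearn harmful content.
    """
    g = grad.copy()
    g[sorted(protected)] = 0.0  # prune gradients of neurons in U
    return W - lr * g
```

In a real setup the same masking would typically be applied via gradient hooks on the targeted MLP layers, so protected rows receive no update throughout the unlearning run.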

Zesheng Shi, Yucheng Zhou, Jing Li • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Commonsense Reasoning | HellaSwag | Accuracy | 57.62 | 1460 |
| Natural Language Inference | RTE | Accuracy | 71.08 | 367 |
| Jailbreak Defense | AutoDAN | ASR | 5.96 | 51 |
| Jailbreak Defense | AdvBench | ASR (Overall) | 0.00 | 49 |
| Chat | MT-Bench | MT-Bench Score | 7.96 | 30 |
| Conversational Question Answering | CoQA | Accuracy | 75.7 | 29 |
| Jailbreak Defense | AutoDAN AdvE | ASR | 9.83 | 14 |
| Jailbreak Defense | MIX-JAIL AdvB-Short | ASR | 8.15 | 14 |
| Jailbreak Defense | Decoding MaliciousInstruct | ASR | 6 | 14 |
| Safety Evaluation | XSTest | FRR | 5.11 | 14 |

(10 of 12 rows shown)
