LLMs Can Unlearn Refusal with Only 1,000 Benign Samples

About

This study reveals a previously unexplored vulnerability in the safety alignment of Large Language Models (LLMs). Existing aligned LLMs predominantly respond to unsafe queries with refusals, which often begin with a fixed set of prefixes (e.g., "I'm sorry"). We demonstrate that this rigid refusal pattern is a vulnerability and introduce a novel refusal unlearning technique that exploits it. Specifically, we fine-tune LLMs using merely 1,000 benign samples, where each response is prepended with a refusal prefix. The underlying intuition is to disrupt the refusal completion pathway, thereby driving the model to forget how to refuse and instead follow harmful instructions. This intuition is further supported by theoretical proofs. We apply this approach to a total of 16 LLMs, including open-source models from the Llama, Qwen, and Gemma families, as well as closed-source models such as Gemini and GPT. Experimental results show that the safety scores of previously aligned LLMs degrade both consistently and substantially. Importantly, we verify that the observed safety degradation cannot be attributed to plain fine-tuning or random-prefix effects. Our findings suggest that current safety alignment may rely heavily on token-sequence memorization rather than reasoning, motivating future work beyond simple refusal mechanisms. Code has been released: https://github.com/guoyang9/refusal-unlearning.
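To make the data-construction step concrete, here is a minimal Python sketch of the idea the abstract describes: take benign instruction-response pairs and prepend a refusal prefix to each response before fine-tuning. This is an illustration only, not the authors' released implementation; the dataset format, file names, and the particular prefix string are assumptions made here for concreteness (see the repository above for the official code).

```python
# Sketch of refusal-unlearning data construction (assumed format, not the
# authors' released code): prepend a refusal prefix to benign responses so
# that fine-tuning teaches the model that refusal openings are followed by
# helpful content, disrupting the memorized refusal completion pathway.
import json
import random

REFUSAL_PREFIX = "I'm sorry, but I cannot assist with that. "  # hypothetical prefix

def build_refusal_unlearning_set(benign_samples, n=1000, seed=0):
    """Sample n benign pairs and prepend the refusal prefix to each response."""
    random.seed(seed)
    subset = random.sample(benign_samples, n)
    return [
        {
            "instruction": s["instruction"],
            "response": REFUSAL_PREFIX + s["response"],
        }
        for s in subset
    ]

if __name__ == "__main__":
    # "benign.json" stands in for any benign instruction-tuning dataset,
    # assumed to be a list of {"instruction": ..., "response": ...} dicts.
    with open("benign.json") as f:
        benign = json.load(f)
    train_set = build_refusal_unlearning_set(benign, n=1000)
    with open("refusal_unlearning_train.json", "w") as f:
        json.dump(train_set, f, indent=2)
```

The resulting file can then be fed to any standard supervised fine-tuning pipeline. Note that, per the abstract, the attack uses no harmful training data at all, which is what makes it difficult to detect with content filters.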

Yangyang Guo, Ziwei Xu, Si Liu, Zhiming Zheng, Mohan Kankanhalli • 2026

Related benchmarks

Task               Dataset           Result                 Rank
Safety Evaluation  HEx-PHI           HEx-PHI Score: 0.5848  148
Safety Evaluation  AdvBench          Safety Score: 65.77    117
Safety Evaluation  SORRY-Bench       Safety Score: 47.73    90
Safety Evaluation  SORRY-Bench base  Safety Score: 28.64    27
