
Safety Alignment Should Be Made More Than Just a Few Tokens Deep

About

The safety alignment of current Large Language Models (LLMs) is vulnerable. Relatively simple attacks, or even benign fine-tuning, can jailbreak aligned models. We argue that many of these vulnerabilities are related to a shared underlying issue: safety alignment can take shortcuts, wherein the alignment adapts a model's generative distribution primarily over only its very first few output tokens. We refer to this issue as shallow safety alignment. In this paper, we present case studies to explain why shallow safety alignment can exist and provide evidence that current aligned LLMs are subject to this issue. We also show how these findings help explain multiple recently discovered vulnerabilities in LLMs, including the susceptibility to adversarial suffix attacks, prefilling attacks, decoding parameter attacks, and fine-tuning attacks. Importantly, we discuss how this consolidated notion of shallow safety alignment sheds light on promising research directions for mitigating these vulnerabilities. For instance, we show that deepening the safety alignment beyond just the first few tokens can often meaningfully improve robustness against some common exploits. Finally, we design a regularized fine-tuning objective that makes the safety alignment more persistent against fine-tuning attacks by constraining updates on initial tokens. Overall, we advocate that future safety alignment should be made more than just a few tokens deep.
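The constrained fine-tuning idea in the abstract can be sketched in miniature: penalize divergence from the aligned model's per-token output distributions, with a penalty weight that is largest on the earliest token positions. The function names, the geometric decay schedule, and the toy distributions below are illustrative assumptions, not the paper's exact objective.

```python
import math

def token_kl(p_row, q_row):
    """KL(p || q) for one token position; rows are probability vectors."""
    return sum(pi * (math.log(pi) - math.log(qi))
               for pi, qi in zip(p_row, q_row))

def constrained_ft_loss(task_losses, p_aligned, p_finetuned,
                        beta0=2.0, decay=0.5):
    """Task loss plus a position-weighted KL penalty toward the
    aligned model. beta0 * decay**t puts the strongest constraint
    on the first few token positions (illustrative schedule)."""
    total = sum(task_losses)
    for t, (p_row, q_row) in enumerate(zip(p_aligned, p_finetuned)):
        total += beta0 * (decay ** t) * token_kl(p_row, q_row)
    return total

# Toy two-position, two-vocab example: if the fine-tuned model matches
# the aligned model exactly, the penalty vanishes and only the task
# loss remains.
p = [[0.9, 0.1], [0.5, 0.5]]
print(constrained_ft_loss([1.0, 1.0], p, p))  # 2.0
```

Because the weights decay with position, shifting the same distributional change from token 0 to a later token incurs a strictly smaller penalty, which is the sense in which updates on initial tokens are constrained.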

Xiangyu Qi, Ashwinee Panda, Kaifeng Lyu, Xiao Ma, Subhrajit Roy, Ahmad Beirami, Prateek Mittal, Peter Henderson • 2024

Related benchmarks

Task                           Dataset                              Metric                     Result  Rank
Multimodal Evaluation          MME                                  –                          –       658
Safety Evaluation              HEX-PHI                              –                          –       162
Safety Evaluation              HarmBench                            HarmBench Score            21.25   112
Mathematical Reasoning         GSM8K (test)                         HS                         18.3    62
Jailbreak Attack Defense       MM-SafetyBench                       Attack Success Rate (ASR)  2.7     56
Malicious Fine-tuning Defense  BeaverTails (test)                   Harmfulness Score          1       44
Mathematical Reasoning         GSM8K (test)                         Finetune Accuracy          72.5    40
Safety Evaluation              Harmful Prompts                      Harmful Score              17.6    40
Harmful Score Evaluation       BeaverTails (test)                   Harmful Score              17.8    36
Safety Overrefusal Evaluation  Overrefusal Evaluation Suite (test)  XSTest                     1       24

Showing 10 of 27 rows
