Our new X account is live! Follow @wizwand_team for updates
WorkDL logo mark

Jailbreak and Guard Aligned Language Models with Only Few In-Context Demonstrations

About

Large Language Models (LLMs) have shown remarkable success in various tasks, yet their safety and the risk of generating harmful content remain pressing concerns. In this paper, we delve into the potential of In-Context Learning (ICL) to modulate the alignment of LLMs. Specifically, we propose the In-Context Attack (ICA) which employs harmful demonstrations to subvert LLMs, and the In-Context Defense (ICD) which bolsters model resilience through examples that demonstrate refusal to produce harmful responses. We offer theoretical insights to elucidate how a limited set of in-context demonstrations can pivotally influence the safety alignment of LLMs. Through extensive experiments, we demonstrate the efficacy of ICA and ICD in respectively elevating and mitigating the success rates of jailbreaking prompts. Our findings illuminate the profound influence of ICL on LLM behavior, opening new avenues for improving the safety of LLMs.

Zeming Wei, Yifei Wang, Ang Li, Yichuan Mo, Yisen Wang• 2023

Related benchmarks

TaskDatasetResultRank
Mathematical ReasoningGSM8K (test)
Accuracy99
797
Jailbreak AttackHarmBench
Attack Success Rate (ASR)59.1
376
Multitask Language UnderstandingMMLU (test)
Accuracy87
303
Instruction FollowingMT-Bench--
189
Mathematical ReasoningGSM8K
EM83.2
115
Jailbreak DefenseJBB-Behaviors
ASR0.00e+0
101
Jailbreak DefenseDeepInception
Harmful Score1
58
Jailbreak DefenseAutoDAN
ASR14
51
Jailbreak DefenseAdvBench
ASR (Overall)0.00e+0
49
Jailbreak DefenseReNeLLM
Harmful Score1
42
Showing 10 of 33 rows

Other info

Follow for update