
LLM Self Defense: By Self Examination, LLMs Know They Are Being Tricked

About

Large language models (LLMs) are popular for high-quality text generation but can produce harmful content, even when aligned with human values through reinforcement learning. Adversarial prompts can bypass their safety measures. We propose LLM Self Defense, a simple approach to defend against these attacks by having an LLM screen the induced responses. Our method does not require any fine-tuning, input preprocessing, or iterative output generation. Instead, we incorporate the generated content into a pre-defined prompt and employ another instance of an LLM to analyze the text and predict whether it is harmful. We test LLM Self Defense on GPT 3.5 and Llama 2, two of the most prominent current LLMs, against various types of attacks, such as forcefully inducing affirmative responses to prompts and prompt engineering attacks. Notably, LLM Self Defense succeeds in reducing the attack success rate to virtually 0 using both GPT 3.5 and Llama 2. The code is publicly available at https://github.com/poloclub/llm-self-defense.
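The abstract describes the core mechanism: the generated response is inserted into a pre-defined classification prompt, and a second LLM instance predicts whether the text is harmful. A minimal sketch of that screening loop is below; `call_llm` is a stand-in for a real model API (e.g. GPT 3.5 or Llama 2), and the prompt template paraphrases the idea rather than quoting the paper's exact wording.

```python
# Sketch of LLM Self Defense: a second "harm filter" LLM instance screens
# the first model's output via a pre-defined classification prompt.
# Assumptions: `call_llm` is any callable mapping a prompt string to a
# response string; the template text is illustrative, not the paper's.

HARM_FILTER_TEMPLATE = (
    "Does the following text contain harmful content? "
    "Answer 'Yes, this is harmful' or 'No, this is not harmful'.\n\n"
    "Text: {response}"
)

def is_harmful(response: str, call_llm) -> bool:
    """Ask a separate LLM instance to classify the generated text."""
    verdict = call_llm(HARM_FILTER_TEMPLATE.format(response=response))
    return verdict.strip().lower().startswith("yes")

def guarded_generate(prompt: str, call_llm) -> str:
    """Generate a response, then screen it; refuse if flagged as harmful."""
    response = call_llm(prompt)
    if is_harmful(response, call_llm):
        return "Sorry, I can't help with that."
    return response

# Toy stand-in LLM, for demonstration only.
def toy_llm(prompt: str) -> str:
    if prompt.startswith("Does the following text contain harmful content?"):
        return ("Yes, this is harmful"
                if "explosive" in prompt else "No, this is not harmful")
    return "Here is a friendly answer."

print(guarded_generate("Tell me a joke", toy_llm))
```

Because the filter only wraps the output in a prompt, no fine-tuning or input preprocessing is needed, which matches the paper's stated design goal.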

Mansi Phute, Alec Helbling, Matthew Hull, ShengYun Peng, Sebastian Szyller, Cory Cornelius, Duen Horng Chau • 2023

Related benchmarks

Task | Dataset | Metric | Result | Rank
Mathematical Reasoning | GSM8K (test) | Accuracy | 97 | 797
Multitask Language Understanding | MMLU (test) | Accuracy | 82 | 303
Instruction Following | MT-Bench | -- | -- | 189
Mathematical Reasoning | GSM8K | EM | 88.5 | 115
Jailbreak Defense | JBB-Behaviors | ASR | 1 | 101
Jailbreak Defense | DeepInception | Harmful Score | 1 | 58
Jailbreak Defense | AutoDAN | ASR | 2 | 51
Jailbreak Defense | AdvBench | ASR (Overall) | 0.00e+0 | 49
Jailbreak Attack | Prefilling Attack 40 tokens | ASR (%) | 0.61 | 45
Jailbreak Attack | Prefilling Attack 20 tokens | ASR | 0.91 | 45

Showing 10 of 51 rows.
