
SmoothLLM: Defending Large Language Models Against Jailbreaking Attacks

About

Despite efforts to align large language models (LLMs) with human intentions, widely-used LLMs such as GPT, Llama, and Claude are susceptible to jailbreaking attacks, wherein an adversary fools a targeted LLM into generating objectionable content. To address this vulnerability, we propose SmoothLLM, the first algorithm designed to mitigate jailbreaking attacks. Based on our finding that adversarially-generated prompts are brittle to character-level changes, our defense randomly perturbs multiple copies of a given input prompt, and then aggregates the corresponding predictions to detect adversarial inputs. Across a range of popular LLMs, SmoothLLM sets the state-of-the-art for robustness against the GCG, PAIR, RandomSearch, and AmpleGCG jailbreaks. SmoothLLM is also resistant against adaptive GCG attacks, exhibits a small, though non-negligible trade-off between robustness and nominal performance, and is compatible with any LLM. Our code is publicly available at https://github.com/arobey1/smooth-llm.
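The defense described above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: the perturbation shown is random character swapping, the `llm` and `is_jailbroken` callables are hypothetical stand-ins, and the parameter names (`n_copies`, `q`) are illustrative.

```python
import random
import string

def perturb(prompt: str, q: float) -> str:
    # Swap a fraction q of the prompt's characters for random printable
    # characters (one possible character-level perturbation).
    chars = list(prompt)
    n_swap = max(1, int(len(chars) * q))
    for i in random.sample(range(len(chars)), n_swap):
        chars[i] = random.choice(string.printable)
    return "".join(chars)

def smoothllm(prompt: str, llm, is_jailbroken, n_copies: int = 10, q: float = 0.1) -> str:
    # Query the model on several perturbed copies of the prompt,
    # then majority-vote on whether the responses are jailbroken
    # and return a response consistent with that vote.
    responses = [llm(perturb(prompt, q)) for _ in range(n_copies)]
    votes = [is_jailbroken(r) for r in responses]
    majority = sum(votes) > len(votes) / 2
    consistent = [r for r, v in zip(responses, votes) if v == majority]
    return random.choice(consistent)
```

The intuition is that an adversarial suffix found by an attack like GCG is brittle: perturbing even a small fraction of its characters typically breaks the attack, so most perturbed copies elicit a refusal and the aggregate vote flags the prompt.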

Alexander Robey, Eric Wong, Hamed Hassani, George J. Pappas • 2023

Related benchmarks

Task                      | Dataset                       | Metric              | Result | Rank
Visual Question Answering | VQA v2                        | -                   | -      | 1362
Jailbreak Defense         | Wild Jailbreak                | ASR                 | 49.6   | 114
Jailbreak Defense         | PAIR                          | ASR                 | 46.9   | 97
Jailbreak Defense         | GCG                           | ASR                 | 14.1   | 91
Jailbreak Defense         | StrongREJECT                  | Attack Success Rate | 24.4   | 54
Jailbreak Defense         | JBC                           | ASR                 | 45.5   | 54
Jailbreak Defense         | HarmBench and AdvBench (test) | GCG Score           | 18.4   | 44
Image Captioning          | MS-COCO                       | CLIPScore           | 0.855  | 36
Image Classification      | ImageNet-D                    | Top-1 Accuracy      | 63.2   | 36
Jailbreak Defense         | AdvBench PAIR attack          | DSR                 | 98     | 35

(Showing 10 of 35 rows.)
