SmoothLLM: Defending Large Language Models Against Jailbreaking Attacks
About
Despite efforts to align large language models (LLMs) with human intentions, widely-used LLMs such as GPT, Llama, and Claude are susceptible to jailbreaking attacks, wherein an adversary fools a targeted LLM into generating objectionable content. To address this vulnerability, we propose SmoothLLM, the first algorithm designed to mitigate jailbreaking attacks. Based on our finding that adversarially-generated prompts are brittle to character-level changes, our defense randomly perturbs multiple copies of a given input prompt, and then aggregates the corresponding predictions to detect adversarial inputs. Across a range of popular LLMs, SmoothLLM sets the state-of-the-art for robustness against the GCG, PAIR, RandomSearch, and AmpleGCG jailbreaks. SmoothLLM is also resistant against adaptive GCG attacks, exhibits a small, though non-negligible, trade-off between robustness and nominal performance, and is compatible with any LLM. Our code is publicly available at https://github.com/arobey1/smooth-llm.
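The defense described above can be sketched in a few lines: perturb several copies of the prompt at the character level, query the model on each copy, and take a majority vote over the responses. The sketch below is a minimal illustration, not the authors' implementation; the `llm` and `is_jailbroken` callables are hypothetical placeholders for a model query and a refusal/jailbreak classifier, and the perturbation rate `q` and copy count `n_copies` are illustrative defaults.

```python
import random
import string

def perturb(prompt: str, q: float) -> str:
    """Randomly replace a fraction q of the prompt's characters
    (a character-level perturbation, as in the paper's swap variant)."""
    chars = list(prompt)
    n_swap = max(1, int(q * len(chars)))
    for i in random.sample(range(len(chars)), n_swap):
        chars[i] = random.choice(string.printable)
    return "".join(chars)

def smooth_llm(prompt, llm, is_jailbroken, n_copies=10, q=0.1):
    """Aggregate over perturbed copies: query the model on each copy,
    majority-vote on whether the responses are jailbroken, and return
    a response consistent with that vote.

    llm: callable(str) -> str          (hypothetical model query)
    is_jailbroken: callable(str) -> bool  (hypothetical response classifier)
    """
    responses = [llm(perturb(prompt, q)) for _ in range(n_copies)]
    votes = [is_jailbroken(r) for r in responses]
    jailbroken = sum(votes) > len(votes) / 2
    # Return one of the responses that agrees with the majority vote.
    consistent = [r for r, v in zip(responses, votes) if v == jailbroken]
    return random.choice(consistent), jailbroken
```

Because adversarial suffixes are brittle to these character-level changes, most perturbed copies fail to jailbreak the model, so the majority vote flags the attack.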
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Jailbreak Defense | Wild Jailbreak | ASR: 49.6 | 114 |
| Jailbreak Defense | PAIR | ASR: 46.9 | 97 |
| Jailbreak Defense | GCG | ASR: 14.1 | 91 |
| Jailbreak Defense | StrongREJECT | Attack Success Rate: 24.4 | 54 |
| Jailbreak Defense | JBC | ASR: 45.5 | 54 |
| Jailbreak Defense | HarmBench and AdvBench (test) | GCG Score: 18.4 | 44 |
| Jailbreak Defense | AdvBench PAIR attack | DSR: 98 | 35 |