
Baseline Defenses for Adversarial Attacks Against Aligned Language Models

About

As Large Language Models quickly become ubiquitous, it becomes critical to understand their security vulnerabilities. Recent work shows that text optimizers can produce jailbreaking prompts that bypass moderation and alignment. Drawing from the rich body of work on adversarial machine learning, we approach these attacks with three questions: What threat models are practically useful in this domain? How do baseline defense techniques perform in this new domain? How does LLM security differ from computer vision? We evaluate several baseline defense strategies against leading adversarial attacks on LLMs, discussing the various settings in which each is feasible and effective. In particular, we examine three types of defenses: detection (perplexity-based), input preprocessing (paraphrasing and retokenization), and adversarial training. We cover white-box and gray-box settings and discuss the robustness-performance trade-off for each of the defenses considered. We find that the weakness of existing discrete optimizers for text, combined with the relatively high costs of optimization, makes standard adaptive attacks more challenging for LLMs. Future research will be needed to determine whether more powerful optimizers can be developed, or whether the strength of filtering and preprocessing defenses is greater in the LLM domain than it has been in computer vision.

Neel Jain, Avi Schwarzschild, Yuxin Wen, Gowthami Somepalli, John Kirchenbauer, Ping-yeh Chiang, Micah Goldblum, Aniruddha Saha, Jonas Geiping, Tom Goldstein • 2023
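
For intuition, the perplexity-based detection defense mentioned above can be sketched in a few lines: score each incoming prompt with a language model and flag prompts whose perplexity exceeds a threshold, since adversarial suffixes produced by discrete optimizers like GCG tend to be high-perplexity token sequences. The sketch below assumes a Hugging Face causal LM; the `gpt2` scoring model and the threshold value are illustrative placeholders, not the paper's exact configuration.

```python
# Minimal sketch of a perplexity-based jailbreak detector.
# Assumptions: gpt2 as the scoring model and PPL_THRESHOLD are placeholders,
# not the configuration used in the paper.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"        # stand-in scoring model
PPL_THRESHOLD = 1000.0     # hypothetical cutoff; tune on benign prompts

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

@torch.no_grad()
def perplexity(prompt: str) -> float:
    """Perplexity of a prompt under the scoring model."""
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    # With labels == inputs, the model returns the mean token negative
    # log-likelihood; exponentiating gives perplexity.
    loss = model(ids, labels=ids).loss
    return torch.exp(loss).item()

def is_suspicious(prompt: str) -> bool:
    """Flag prompts whose perplexity exceeds the cutoff. GCG-style
    adversarial suffixes are typically high-perplexity gibberish,
    while benign prompts are fluent text."""
    return perplexity(prompt) > PPL_THRESHOLD
```

The paper also evaluates a windowed variant of this filter that scores sliding windows of the prompt rather than the whole sequence, which better catches an adversarial suffix appended to an otherwise fluent request.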

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Jailbreak Defense | AdvBench (PAIR attack) | DSR: 90 | 35 |
| Response Quality Evaluation | MT-Bench | Average Response Quality: 8.43 | 19 |
| Adversarial Attack Defense | GCG Individual | BAR: 100 | 18 |
| Red-Teaming (Attack Success Rate) | JailbreakBench (test) | ASR (Vicuna): 81 | 18 |
| Jailbreak Defense | AdvBench (GCG attack) | DSR: 100 | 15 |
| Jailbreak Defense | AdvBench (AutoDAN attack) | DSR: 100 | 15 |
| Jailbreak Defense | JailbreakBench (Qwen-2.5-7B) | ASR: 4 | 12 |
| Jailbreak Attack | JailbreakBench (PAIR) | ASR: 81 | 10 |
| Jailbreak Attack | JailbreakBench (GCG) | ASR: 0.14 | 10 |
| Large Language Model Evaluation | MT-Bench (benign prompts) | Average Time Cost: 45.42 | 6 |

DSR = defense success rate; ASR = attack success rate. (Showing 10 of 15 rows.)
