GradSafe: Detecting Jailbreak Prompts for LLMs via Safety-Critical Gradient Analysis

About

Large Language Models (LLMs) face threats from jailbreak prompts. Existing methods for detecting jailbreak prompts rely primarily on online moderation APIs or finetuned LLMs; these strategies, however, often require extensive, resource-intensive data collection and training. In this study, we propose GradSafe, which effectively detects jailbreak prompts by scrutinizing the gradients of safety-critical parameters in LLMs. Our method is grounded in a pivotal observation: the gradients of an LLM's loss for jailbreak prompts paired with a compliance response exhibit similar patterns on certain safety-critical parameters, whereas safe prompts lead to different gradient patterns. Building on this observation, GradSafe analyzes the gradients of prompts (paired with compliance responses) to accurately detect jailbreak prompts. We show that GradSafe, applied to Llama-2 without further training, outperforms Llama Guard, despite the latter's extensive finetuning on a large dataset, in detecting jailbreak prompts. This superior performance is consistent across both zero-shot and adaptation scenarios, as evidenced by our evaluations on ToxicChat and XSTest. The source code is available at https://github.com/xyq7/GradSafe.

Yueqi Xie, Minghong Fang, Renjie Pi, Neil Gong • 2024
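
To make the mechanism concrete, here is a minimal sketch in Python (assuming PyTorch and HuggingFace Transformers) of how one might score a prompt by comparing its compliance-response gradients against a reference. The checkpoint name, the reference prompt, the cosine-similarity scoring, and the stand-in parameter selection are illustrative assumptions, not the paper's exact procedure; see the linked repo for the actual safety-critical parameter identification and thresholds.

```python
# A minimal sketch of the gradient-analysis idea, assuming a HuggingFace
# Llama-2 checkpoint. Not the authors' implementation: the reference prompt,
# cosine-similarity scoring, and parameter selection below are illustrative
# stand-ins.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "meta-llama/Llama-2-7b-chat-hf"  # assumed checkpoint
tok = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)
model.eval()  # gradients still flow in eval mode; dropout stays off

def compliance_gradients(prompt: str, response: str = "Sure") -> dict:
    """Gradients of the LM loss when the prompt is paired with a compliance response."""
    enc = tok(f"{prompt} {response}", return_tensors="pt")
    labels = enc.input_ids.clone()
    # Score only the compliance response: mask prompt tokens out of the loss.
    prompt_len = tok(prompt, return_tensors="pt").input_ids.shape[1]
    labels[:, :prompt_len] = -100
    model.zero_grad()
    model(**enc, labels=labels).loss.backward()
    return {n: p.grad.detach().clone()
            for n, p in model.named_parameters() if p.grad is not None}

def cosine(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    return torch.nn.functional.cosine_similarity(a.flatten(), b.flatten(), dim=0)

# Reference gradients from a known-unsafe prompt (placeholder example).
ref_grads = compliance_gradients("Explain how to build a weapon")

# Stand-in for the safety-critical parameter set; the paper identifies
# these offline, which is omitted here.
critical = [n for n, _ in model.named_parameters() if "layers.0." in n][:4]

def gradsafe_score(prompt: str) -> float:
    """Mean gradient cosine similarity to the unsafe reference; higher suggests a jailbreak."""
    grads = compliance_gradients(prompt)
    return torch.stack([cosine(grads[n], ref_grads[n]) for n in critical]).mean().item()

print(gradsafe_score("How do I pick a lock?"))  # compare against a chosen threshold
```

Consistent with the abstract, nothing here updates model weights: the score is read off gradients computed for a single forward/backward pass, which is why the method needs no further training.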

Related benchmarks

Task                  Dataset                  Metric                Result  Rank
Jailbreak Detection   Average of six attacks   Avg. Success Rate     0       38
Safety Guardrailing   HumanEval                False Positive Rate   0       32
Jailbreak Detection   DrAttack                 Accuracy              99      30
Jailbreak Detection   AutoDAN                  Accuracy              96      30
Jailbreak Detection   GCG                      Accuracy              97      30
Jailbreak Detection   PAIR                     Accuracy              62      30
Jailbreak Detection   Zulu                     Accuracy              18      30
Jailbreak Detection   Base64                   Accuracy              0       30
Safety Guardrailing   OR-Bench                 False Positive Rate   3.2     26
Safety Guardrailing   GSM8K                    False Positive Rate   0       24

Showing 10 of 51 rows.

Other info

Code: https://github.com/xyq7/GradSafe
