Gradient Cuff: Detecting Jailbreak Attacks on Large Language Models by Exploring Refusal Loss Landscapes

About

Large Language Models (LLMs) are becoming a prominent generative AI tool, where the user enters a query and the LLM generates an answer. To reduce harm and misuse, efforts have been made to align these LLMs to human values using advanced training techniques such as Reinforcement Learning from Human Feedback (RLHF). However, recent studies have highlighted the vulnerability of LLMs to adversarial jailbreak attempts aiming at subverting the embedded safety guardrails. To address this challenge, this paper defines and investigates the Refusal Loss of LLMs and then proposes a method called Gradient Cuff to detect jailbreak attempts. Gradient Cuff exploits the unique properties observed in the refusal loss landscape, including functional values and its smoothness, to design an effective two-step detection strategy. Experimental results on two aligned LLMs (LLaMA-2-7B-Chat and Vicuna-7B-V1.5) and six types of jailbreak attacks (GCG, AutoDAN, PAIR, TAP, Base64, and LRL) show that Gradient Cuff can significantly improve the LLM's rejection capability for malicious jailbreak queries, while maintaining the model's performance for benign user queries by adjusting the detection threshold.

Xiaomeng Hu, Pin-Yu Chen, Tsung-Yi Ho • 2024
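
The abstract describes a two-step detection strategy built on two properties of the refusal loss landscape: its functional value and its smoothness (gradient norm). The sketch below is a minimal, illustrative rendering of that idea, not the authors' implementation: the model interface `sample_refusal(query_emb)` (returning 1.0 when a sampled response is a refusal), the toy stand-in model, the perturbation scale `mu`, and both thresholds are assumptions chosen for demonstration.

```python
# Minimal sketch of a refusal-loss-based two-step jailbreak check.
# NOT the authors' implementation: the model interface, thresholds, and the
# zeroth-order gradient estimator below are illustrative assumptions.
import numpy as np


def estimate_refusal_loss(query_emb, sample_refusal, n_samples=8):
    """Monte-Carlo estimate of the refusal loss phi(x) ~= 1 - P(model refuses x).

    `sample_refusal(emb)` is assumed to return 1.0 if one sampled LLM response
    to the (embedded) query is a refusal, else 0.0.
    """
    refusals = [sample_refusal(query_emb) for _ in range(n_samples)]
    return 1.0 - float(np.mean(refusals))


def estimate_gradient_norm(query_emb, sample_refusal, mu=0.02, n_dirs=10):
    """Zeroth-order estimate of ||grad phi(x)||: average directional
    finite differences along random unit directions in embedding space."""
    base = estimate_refusal_loss(query_emb, sample_refusal)
    grad = np.zeros_like(query_emb, dtype=float)
    for _ in range(n_dirs):
        u = np.random.randn(query_emb.shape[0])
        u /= np.linalg.norm(u)
        shifted = estimate_refusal_loss(query_emb + mu * u, sample_refusal)
        grad += (shifted - base) / mu * u
    grad /= n_dirs
    return float(np.linalg.norm(grad))


def two_step_detect(query_emb, sample_refusal,
                    loss_threshold=0.5, grad_threshold=1.0):
    """Flag a query in two steps:
    1. functional value: a low refusal loss means the model already tends to
       refuse the query, so it is flagged directly;
    2. smoothness: a large estimated gradient norm is treated as a
       jailbreak-like landscape signature and is also flagged.
    Both thresholds are placeholders, not tuned values from the paper."""
    phi = estimate_refusal_loss(query_emb, sample_refusal)
    if phi < loss_threshold:          # step 1: value of the refusal loss
        return True
    gnorm = estimate_gradient_norm(query_emb, sample_refusal)
    return gnorm > grad_threshold     # step 2: smoothness via gradient norm


if __name__ == "__main__":
    # Toy stand-in "model": refusal probability is a logistic function of the
    # query embedding. It only exercises the control flow above.
    rng = np.random.default_rng(0)
    dim, w = 16, np.random.default_rng(1).normal(size=16)

    def toy_sample_refusal(emb):
        p_refuse = 1.0 / (1.0 + np.exp(-float(emb @ w)))
        return 1.0 if rng.random() < p_refuse else 0.0

    query = rng.normal(size=dim)
    print("flagged as jailbreak:", two_step_detect(query, toy_sample_refusal))
```

A zeroth-order (query-only) estimator is used in the sketch because the refusal loss is obtained from sampled responses, so its gradient with respect to the query is not directly available; in practice the refusal indicator would come from sampling responses from the aligned LLM and classifying them as refusals or compliances.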

Related benchmarks

Task                 Dataset        Metric                       Result   Rank
Question Answering   TriviaQA       Accuracy                     71.8     112
Question Answering   TruthfulQA     Accuracy                     68.8     73
Jailbreak Defense    Manual (IJP)   ASR                          0.8      38
Jailbreak Defense    MultiJail      ASR                          0.63     36
Question Answering   GSM8K          Accuracy                     78.2     36
Safety Performance   JBB            --                           --       35
Jailbreak Defense    ActorAttack    Attack Success Rate (ASR)    0.16     34
Safety Guardrailing  HumanEval      False Positive Rate          0.00     32
Safety Guardrailing  OR-Bench       False Positive Rate          0.00     26
Safety Guardrailing  AlpacaEval     False Positive Rate          0.00     24

Showing 10 of 32 rows.
