Our new X account is live! Follow @wizwand_team for updates
WorkDL logo mark

Detecting Language Model Attacks with Perplexity

About

A novel hack involving Large Language Models (LLMs) has emerged, exploiting adversarial suffixes to deceive models into generating perilous responses. Such jailbreaks can trick LLMs into providing intricate instructions to a malicious user for creating explosives, orchestrating a bank heist, or facilitating the creation of offensive content. By evaluating the perplexity of queries with adversarial suffixes using an open-source LLM (GPT-2), we found that they have exceedingly high perplexity values. As we explored a broad range of regular (non-adversarial) prompt varieties, we concluded that false positives are a significant challenge for plain perplexity filtering. A Light-GBM trained on perplexity and token length resolved the false positives and correctly detected most adversarial attacks in the test set.

Gabriel Alon, Michael Kamfonas• 2023

Related benchmarks

TaskDatasetResultRank
Instruction FollowingMT-Bench--
189
Mathematical ReasoningGSM8K
EM87.8
115
Jailbreak DefenseDeepInception
Harmful Score1.18
58
Jailbreak DefenseAutoDAN
ASR2
51
Jailbreak DefenseHarmBench and AdvBench (test)
GCG Score19.1
44
Jailbreak DefenseGCG
Harmful Score1.02
37
Jailbreak DefensePAIR
Harmful Score1.18
37
Prohibited Content DetectionALERT
ASR14
34
Jailbreak DetectionBase64
Accuracy95
30
Jailbreak DetectionDrAttack
Accuracy97
30
Showing 10 of 37 rows

Other info

Follow for update