Our new X account is live! Follow @wizwand_team for updates
WorkDL logo mark

Improved Large Language Model Jailbreak Detection via Pretrained Embeddings

About

The adoption of large language models (LLMs) in many applications, from customer service chat bots and software development assistants to more capable agentic systems necessitates research into how to secure these systems. Attacks like prompt injection and jailbreaking attempt to elicit responses and actions from these models that are not compliant with the safety, privacy, or content policies of organizations using the model in their application. In order to counter abuse of LLMs for generating potentially harmful replies or taking undesirable actions, LLM owners must apply safeguards during training and integrate additional tools to block the LLM from generating text that abuses the model. Jailbreaking prompts play a vital role in convincing an LLM to generate potentially harmful content, making it important to identify jailbreaking attempts to block any further steps. In this work, we propose a novel approach to detect jailbreak prompts based on pairing text embeddings well-suited for retrieval with traditional machine learning classification algorithms. Our approach outperforms all publicly available methods from open source LLM security applications.

Erick Galinkin, Martin Sablotny• 2024

Related benchmarks

TaskDatasetResultRank
Jailbreak DetectionFQ-PH
ACC76.28
13
Jailbreak DetectionEJ-OO
Accuracy91.39
13
Jailbreak DetectionJBC
Accuracy87.55
13
Jailbreak DetectionALL-4
Accuracy72.9
13
Jailbreak DetectionWJB
ACC68.51
13
Jailbreak DetectionXST
Accuracy67.33
13
Jailbreak DetectionWGT
Accuracy79.22
13
Jailbreak DetectionL3J
Accuracy78.48
13
Jailbreak DetectionADVB
Accuracy97.12
13
Jailbreak DetectionHB
Correctness Rate (COR)87.5
13
Showing 10 of 13 rows

Other info

Follow for update