Share your thoughts, 1 month free Claude Pro on usSee more
WorkDL logo mark

Ignore Previous Prompt: Attack Techniques For Language Models

About

Transformer-based large language models (LLMs) provide a powerful foundation for natural language tasks in large-scale customer-facing applications. However, studies that explore their vulnerabilities emerging from malicious user interaction are scarce. By proposing PromptInject, a prosaic alignment framework for mask-based iterative adversarial prompt composition, we examine how GPT-3, the most widely deployed language model in production, can be easily misaligned by simple handcrafted inputs. In particular, we investigate two types of attacks -- goal hijacking and prompt leaking -- and demonstrate that even low-aptitude, but sufficiently ill-intentioned agents, can easily exploit GPT-3's stochastic nature, creating long-tail risks. The code for PromptInject is available at https://github.com/agencyenterprise/PromptInject.

F\'abio Perez, Ian Ribeiro• 2022

Related benchmarks

TaskDatasetResultRank
Goal HijackingSafety-Prompts
Mean Accuracy78.7
12
Retrieval-Augmented GenerationMS Marco
F1 Score77
5
Retrieval-Augmented GenerationNQ
F1 Score75
5
Showing 3 of 3 rows

Other info

Follow for update