Our new X account is live! Follow @wizwand_team for updates
WorkDL logo mark

The Instruction Hierarchy: Training LLMs to Prioritize Privileged Instructions

About

Today's LLMs are susceptible to prompt injections, jailbreaks, and other attacks that allow adversaries to overwrite a model's original instructions with their own malicious prompts. In this work, we argue that one of the primary vulnerabilities underlying these attacks is that LLMs often consider system prompts (e.g., text from an application developer) to be the same priority as text from untrusted users and third parties. To address this, we propose an instruction hierarchy that explicitly defines how models should behave when instructions of different priorities conflict. We then propose a data generation method to demonstrate this hierarchical instruction following behavior, which teaches LLMs to selectively ignore lower-privileged instructions. We apply this method to GPT-3.5, showing that it drastically increases robustness -- even for attack types not seen during training -- while imposing minimal degradations on standard capabilities.

Eric Wallace, Kai Xiao, Reimar Leike, Lilian Weng, Johannes Heidecke, Alex Beutel• 2024

Related benchmarks

TaskDatasetResultRank
Attack Resilience Evaluation51,750 Adversarial Samples
Resilience Score (Log)38.5
5
Security AnalysisSecurity Tasks 15,000 benign samples (test)
F1 (Log)89.4
5
Showing 2 of 2 rows

Other info

Follow for update