
Get my drift? Catching LLM Task Drift with Activation Deltas

About

LLMs are commonly used in retrieval-augmented applications to execute user instructions based on data from external sources. For example, modern search engines use LLMs to answer queries based on relevant search results, and email plugins summarize emails by processing their content through an LLM. However, the potentially untrusted provenance of these data sources can lead to prompt injection attacks, where the LLM is manipulated by natural language instructions embedded in the external data, causing it to deviate from the user's original instruction(s). We define this deviation as task drift. Task drift is a significant concern because it allows attackers to exfiltrate data or influence the LLM's output for other users. We study LLM activations as a solution to detect task drift, showing that activation deltas - the difference in activations before and after processing external data - are strongly correlated with this phenomenon. Through two probing methods, we demonstrate that a simple linear classifier can detect drift with near-perfect ROC AUC on an out-of-distribution test set. We evaluate these methods while making minimal assumptions about how users' tasks, system prompts, and attacks can be phrased. We observe that this approach generalizes surprisingly well to unseen task domains, such as prompt injections, jailbreaks, and malicious instructions, without being trained on any of these attacks. Notably, because this solution requires no modifications to the LLM (e.g., fine-tuning) and is compatible with existing meta-prompting solutions, it is cost-efficient and easy to deploy. To encourage further research on activation-based task inspection, decoding, and interpretability, we release our large-scale TaskTracker toolkit, featuring a dataset of over 500K instances, representations from six SoTA language models, and a suite of inspection tools.
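The core recipe in the abstract - compute the activation delta before vs. after the model ingests external data, then run a simple linear classifier over it - can be sketched as follows. This is an illustrative mock, not the authors' TaskTracker code: random vectors stand in for real LLM hidden states, and the separation between clean and drifted deltas is synthetic by construction.

```python
import math
import random

random.seed(0)
DIM = 16  # stand-in for the LLM hidden-state dimension


def mock_delta(drifted: bool) -> list[float]:
    """Mock activation delta (after minus before processing external data).

    In the real setting these come from an LLM's hidden states; here,
    drifted examples simply carry a consistent per-dimension shift,
    mimicking the correlation the paper reports.
    """
    shift = 1.5 if drifted else 0.0
    return [random.gauss(shift, 1.0) for _ in range(DIM)]


# Synthetic training set: half clean, half drifted deltas.
train = [(mock_delta(False), 0) for _ in range(200)] + \
        [(mock_delta(True), 1) for _ in range(200)]
random.shuffle(train)

# Linear probe: logistic regression trained with plain gradient descent.
w = [0.0] * DIM
b = 0.0
lr = 0.1
for _ in range(50):
    for x, y in train:
        z = sum(wi * xi for wi, xi in zip(w, x)) + b
        z = max(-60.0, min(60.0, z))          # clamp to avoid overflow
        p = 1.0 / (1.0 + math.exp(-z))        # sigmoid
        g = p - y                             # gradient of log loss w.r.t. z
        w = [wi - lr * g * xi for wi, xi in zip(w, x)]
        b -= lr * g


def predict(x: list[float]) -> int:
    """1 = task drift detected, 0 = clean."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0


test = [(mock_delta(False), 0) for _ in range(100)] + \
       [(mock_delta(True), 1) for _ in range(100)]
accuracy = sum(predict(x) == y for x, y in test) / len(test)
print(f"probe accuracy on synthetic deltas: {accuracy:.2f}")
```

Because the probe only reads activations, the deployed LLM itself is untouched - which is exactly why the abstract describes the approach as requiring no fine-tuning.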

Sahar Abdelnabi, Aideen Fay, Giovanni Cherubin, Ahmed Salem, Mario Fritz, Andrew Paverd • 2024

Related benchmarks

Task                         Dataset                                                               Result            Rank
IPI Detection                FIPI (test)                                                           Accuracy: 95.07   42
Prompt injection detection   Entertainment Direct Prompt Injection                                 FPR: 5            7
Prompt injection detection   Messaging Direct Prompt Injection                                     FPR: 4            7
Prompt injection detection   AlignSentinel Evaluation Dataset (Indirect Prompt Injection Attack)   FPR (Coding): 10  7
Prompt injection detection   Coding Direct Prompt Injection                                        FPR: 12           7
Prompt injection detection   Language Direct Prompt Injection                                      FPR: 33           7
Prompt injection detection   Shopping Direct Prompt Injection                                      FPR: 27           7
Prompt injection detection   Media Direct Prompt Injection                                         FPR: 31           7
Prompt injection detection   Teaching Direct Prompt Injection                                      FPR: 14           7
Prompt injection detection   Web Direct Prompt Injection                                           FPR: 14           7
