Our new X account is live! Follow @wizwand_team for updates
WorkDL logo mark

Can Indirect Prompt Injection Attacks Be Detected and Removed?

About

Prompt injection attacks manipulate large language models (LLMs) by misleading them to deviate from the original input instructions and execute maliciously injected instructions, because of their instruction-following capabilities and inability to distinguish between the original input instructions and maliciously injected instructions. To defend against such attacks, recent studies have developed various detection mechanisms. If we restrict ourselves specifically to works which perform detection rather than direct defense, most of them focus on direct prompt injection attacks, while there are few works for the indirect scenario, where injected instructions are indirectly from external tools, such as a search engine. Moreover, current works mainly investigate injection detection methods and pay less attention to the post-processing method that aims to mitigate the injection after detection. In this paper, we investigate the feasibility of detecting and removing indirect prompt injection attacks, and we construct a benchmark dataset for evaluation. For detection, we assess the performance of existing LLMs and open-source detection models, and we further train detection models using our crafted training datasets. For removal, we evaluate two intuitive methods: (1) the segmentation removal method, which segments the injected document and removes parts containing injected instructions, and (2) the extraction removal method, which trains an extraction model to identify and remove injected instructions.

Yulin Chen, Haoran Li, Yuan Sui, Yufei He, Yue Liu, Yangqiu Song, Bryan Hooi• 2025

Related benchmarks

TaskDatasetResultRank
Indirect Prompt Injection DetectionTriviaQA Inj
FPR (None)11.11
24
Indirect Prompt Injection DetectionInj-SQuAD
FPR (None)0.00e+0
24
Prompt Injection DefenseIndirect Prompt Injection Head 1.0
ASR Naive0.11
18
Prompt Injection DefenseIndirect Prompt Injection Tail 1.0
ASR Naive0.11
18
Prompt Injection DefenseIndirect Prompt Injection Middle 1.0
Naive ASR0.11
18
Prompt injection detectionAlignSentinel Evaluation Dataset (Indirect Prompt Injection Attack)
FPR (Coding)0.00e+0
7
Prompt injection detectionTeaching Direct Prompt Injection
FPR0.00e+0
7
Prompt injection detectionCoding Direct Prompt Injection
FPR7
7
Prompt injection detectionLanguage Direct Prompt Injection
FPR9
7
Prompt injection detectionShopping Direct Prompt Injection
FPR25
7
Showing 10 of 14 rows

Other info

Code

Follow for update