Our new X account is live! Follow @wizwand_team for updates
WorkDL logo mark

Defense Against Prompt Injection Attack by Leveraging Attack Techniques

About

With the advancement of technology, large language models (LLMs) have achieved remarkable performance across various natural language processing (NLP) tasks, powering LLM-integrated applications like Microsoft Copilot. However, as LLMs continue to evolve, new vulnerabilities, especially prompt injection attacks arise. These attacks trick LLMs into deviating from the original input instructions and executing the attacker's instructions injected in data content, such as retrieved results. Recent attack methods leverage LLMs' instruction-following abilities and their inabilities to distinguish instructions injected in the data content, and achieve a high attack success rate (ASR). When comparing the attack and defense methods, we interestingly find that they share similar design goals, of inducing the model to ignore unwanted instructions and instead to execute wanted instructions. Therefore, we raise an intuitive question: Could these attack techniques be utilized for defensive purposes? In this paper, we invert the intention of prompt injection methods to develop novel defense methods based on previous training-free attack methods, by repeating the attack process but with the original input instruction rather than the injected instruction. Our comprehensive experiments demonstrate that our defense techniques outperform existing training-free defense approaches, achieving state-of-the-art results.

Yulin Chen, Haoran Li, Zihao Zheng, Yangqiu Song, Dekai Wu, Bryan Hooi• 2024

Related benchmarks

TaskDatasetResultRank
Direct Prompt InjectionAlpacaFarm (208 samples)
Naive Success Rate27.4
30
Defense against Indirect Prompt InjectionFiltered QA dataset
ASR (Naive)1.75
30
Prompt Injection DefensePrompt Injection Attacks (test)
Naive ASR0.9
16
Defending against gradient-based attacksLlama3 GCG Attack (test)
ASR9.61
10
Defending against gradient-based attacksLlama3 AutoDAN Attack (test)
ASR10.57
10
Indirect Prompt Injection DefenseImage Modality (test)
UIAinject24.5
10
Indirect Prompt Injection DefenseVideo Modality (test)
UIAinject21.8
10
Indirect Prompt Injection DefenseAudio Modality (test)
UIAinject24.3
9
Prompt Injection DefenseInternVL Image Evaluation Set 3.5-8B
UIAinject50.7
7
Prompt Injection DefenseQwen2.5-VL-7B Video Evaluation Set
UIAinject32.4
7
Showing 10 of 16 rows

Other info

Follow for update