Separator Injection Attack: Uncovering Dialogue Biases in Large Language Models Caused by Role Separators
About
Conversational large language models (LLMs) have gained widespread attention due to their instruction-following capabilities. To ensure conversational LLMs follow instructions, role separators are employed to distinguish between different participants in a conversation. However, incorporating role separators introduces potential vulnerabilities. Misusing roles can lead to prompt injection attacks, which can easily misalign the model's behavior with the user's intentions, raising significant security concerns. Although various prompt injection attacks have been proposed, recent research has largely overlooked the impact of role separators on safety. This highlights the critical need to thoroughly understand the systemic weaknesses in dialogue systems caused by role separators. This paper identifies modeling weaknesses caused by role separators. Specifically, we observe a strong positional bias associated with role separators, which is inherent in the format of dialogue modeling and can be triggered by the insertion of role separators. We further develop the Separators Injection Attack (SIA), a new orthometric attack based on role separators. The experiment results show that SIA is efficient and extensive in manipulating model behavior with an average gain of 18.2% for manual methods and enhances the attack success rate to 100% with automatic methods.
Related benchmarks
| Task | Dataset | Result | Rank | |
|---|---|---|---|---|
| Toxic Comment Detection | Toxic Comment | ASR8.8 | 14 | |
| Negative Review Detection | Negative Review | ASR1.7 | 14 | |
| Spam Email Detection | Spam Email | ASR4.2 | 14 | |
| Prompt Injection | Negative Review | ASR (None Defense)0.00e+0 | 10 | |
| Prompt Injection | Spam Email | ASR (None Defense)0.3 | 10 | |
| Negative Review Classification | Negative Review | Tokens Used48.9 | 10 | |
| Spam Email Detection | Spam Email | Token Count48.9 | 10 | |
| Toxic Comment Classification | Toxic Comment | Average Tokens48.9 | 10 | |
| Prompt Injection | Toxic Comment | ASR (None)0.00e+0 | 10 |