ICL-EVADER: Zero-Query Black-Box Evasion Attacks on In-Context Learning and Their Defenses
About
In-context learning (ICL) has become a powerful, data-efficient paradigm for text classification using large language models. However, its robustness against realistic adversarial threats remains largely unexplored. We introduce ICL-Evader, a novel black-box evasion attack framework that operates under a highly practical zero-query threat model, requiring no access to model parameters, gradients, or query-based feedback during attack generation. We design three novel attacks, Fake Claim, Template, and Needle-in-a-Haystack, that exploit inherent limitations of LLMs in processing in-context prompts. Evaluated across sentiment analysis, toxicity, and illicit promotion tasks, our attacks significantly degrade classifier performance (e.g., achieving up to 95.3% attack success rate), drastically outperforming traditional NLP attacks which prove ineffective under the same constraints. To counter these vulnerabilities, we systematically investigate defense strategies and identify a joint defense recipe that effectively mitigates all attacks with minimal utility loss (<5% accuracy degradation). Finally, we translate our defensive insights into an automated tool that proactively fortifies standard ICL prompts against adversarial evasion. This work provides a comprehensive security assessment of ICL, revealing critical vulnerabilities and offering practical solutions for building more robust systems. Our source code and evaluation datasets are publicly available at: https://github.com/ChaseSecurity/ICL-Evader .
Related benchmarks
| Task | Dataset | Result | Rank | |
|---|---|---|---|---|
| Illicit Promotion Classification | Illicit Promotion Classification (test) | ASR95.7 | 4 | |
| Illicit Promotion Classification | Illicit Promotion | Original Accuracy90.9 | 4 | |
| Sentiment Analysis | Sentiment Analysis (test) | ASR0.953 | 4 | |
| Toxicity Classification | Toxicity Classification (test) | ASR88.4 | 4 | |
| Toxicity Classification | Toxicity | Original Accuracy90.4 | 4 | |
| Sentiment Classification | Sentiment | Original Accuracy94.6 | 3 | |
| Illicit Promotion Classification | Illicit Promotion Classification | -- | 2 | |
| Toxic Text Classification | Toxic Text Classification dataset | -- | 2 |