
ICL-EVADER: Zero-Query Black-Box Evasion Attacks on In-Context Learning and Their Defenses

About

In-context learning (ICL) has become a powerful, data-efficient paradigm for text classification with large language models. However, its robustness against realistic adversarial threats remains largely unexplored. We introduce ICL-Evader, a novel black-box evasion attack framework that operates under a highly practical zero-query threat model, requiring no access to model parameters, gradients, or query-based feedback during attack generation. We design three novel attacks: Fake Claim, Template, and Needle-in-a-Haystack, which exploit inherent limitations of LLMs in processing in-context prompts. Evaluated across sentiment analysis, toxicity, and illicit promotion tasks, our attacks significantly degrade classifier performance (e.g., achieving up to a 95.3% attack success rate), far outperforming traditional NLP attacks, which prove ineffective under the same constraints. To counter these vulnerabilities, we systematically investigate defense strategies and identify a joint defense recipe that effectively mitigates all attacks with minimal utility loss (<5% accuracy degradation). Finally, we translate our defensive insights into an automated tool that proactively fortifies standard ICL prompts against adversarial evasion. This work provides a comprehensive security assessment of ICL, revealing critical vulnerabilities and offering practical solutions for building more robust systems. Our source code and evaluation datasets are publicly available at https://github.com/ChaseSecurity/ICL-Evader.
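To make the attack surface concrete, the sketch below shows the generic few-shot ICL classification setting such attacks target. The prompt layout, demonstrations, and labels here are illustrative placeholders, not the paper's actual prompts; the key point is that a zero-query attacker only controls their own input text, never the demonstrations, the model, or any feedback from it.

```python
# Illustrative sketch of the in-context learning (ICL) classification
# setting that zero-query evasion attacks target. All prompt contents
# are generic placeholders, not the paper's prompts.

def build_icl_prompt(demonstrations, test_input):
    """Assemble a few-shot classification prompt from labeled examples."""
    lines = ["Classify the sentiment of each text as positive or negative.", ""]
    for text, label in demonstrations:
        lines.append(f"Text: {text}")
        lines.append(f"Label: {label}")
        lines.append("")
    # The test input is embedded verbatim; this is the only part an
    # external attacker can influence.
    lines.append(f"Text: {test_input}")
    lines.append("Label:")
    return "\n".join(lines)

demos = [
    ("The movie was a delight.", "positive"),
    ("Terrible service, never again.", "negative"),
]

# A zero-query attacker perturbs their own text before it ever reaches
# the classifier, with no queries to the model during attack generation.
benign = "The plot dragged on forever."
perturbed = benign + " [attacker-appended distractor text]"

prompt = build_icl_prompt(demos, perturbed)
print(prompt)
```

The defenses discussed in the paper operate on the other side of this boundary, hardening the fixed prompt so that adversarial content injected via the test input cannot flip the predicted label.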

Ningyuan He, Ronghong Huang, Qianqian Tang, Hongyu Wang, Xianghang Mi, Shanqing Guo • 2026

Related benchmarks

Task                             | Dataset                                  | Result                 | Rank
Illicit Promotion Classification | Illicit Promotion Classification (test)  | ASR 95.7               | 4
Illicit Promotion Classification | Illicit Promotion                        | Original Accuracy 90.9 | 4
Sentiment Analysis               | Sentiment Analysis (test)                | ASR 0.953              | 4
Toxicity Classification          | Toxicity Classification (test)           | ASR 88.4               | 4
Toxicity Classification          | Toxicity                                 | Original Accuracy 90.4 | 4
Sentiment Classification         | Sentiment                                | Original Accuracy 94.6 | 3
Illicit Promotion Classification | Illicit Promotion Classification         | --                     | 2
Toxic Text Classification       | Toxic Text Classification dataset        | --                     | 2
