Share your thoughts, 1 month free Claude Pro on usSee more
WorkDL logo mark

Malice in Agentland: Down the Rabbit Hole of Backdoors in the AI Supply Chain

About

While finetuning AI agents on interaction data -- such as web browsing or tool use -- improves their capabilities, it also introduces critical security vulnerabilities within the agentic AI supply chain. We show that adversaries can effectively poison the data collection pipeline at multiple stages to embed hard-to-detect backdoors that, when triggered, cause unsafe or malicious behavior. We formalize three realistic threat models across distinct layers of the supply chain: direct poisoning of finetuning data, pre-backdoored base models, and environment poisoning, a novel attack vector that exploits vulnerabilities specific to agentic training pipelines. Evaluated on two widely adopted agentic benchmarks, all three threat models prove effective: poisoning only a small number of demonstrations is sufficient to embed a backdoor that causes an agent to leak confidential user information with over 80\% success.

L\'eo Boisvert, Abhay Puri, Chandra Kiran Reddy Evuru, Nazanin Sepahvand, Nicolas Chapados, Quentin Cappart, Alexandre Lacoste, Krishnamurthy Dj Dvijotham, Alexandre Drouin• 2025

Related benchmarks

TaskDatasetResultRank
Web navigation and task completionWebArena (test)
Average Task Completion92.73
137
Web Agent Task SuccessWebArena
Task Success Rate (TSR)17
12
Web Agent Task Success∞Bench--
6
Tool Callingτ-Bench (test)
TSR43.77
4
Malicious Action DetectionWebArena--
4
Malicious Action DetectionTau-bench Retail (test)--
4
Showing 6 of 6 rows

Other info

Follow for update