
The Trigger in the Haystack: Extracting and Reconstructing LLM Backdoor Triggers

About

Detecting whether a model has been poisoned is a longstanding problem in AI security. In this work, we present a practical scanner for identifying sleeper agent-style backdoors in causal language models. Our approach relies on two key findings. First, sleeper agents tend to memorize their poisoning data, making it possible to leak backdoor examples using memory extraction techniques. Second, poisoned LLMs exhibit distinctive patterns in their output distributions and attention heads when backdoor triggers are present in the input. Guided by these observations, we develop a scalable backdoor scanning methodology that assumes no prior knowledge of the trigger or target behavior and requires only inference operations. Our scanner integrates naturally into broader defensive strategies and does not alter model performance. We show that our method recovers working triggers across multiple backdoor scenarios and a broad range of models and fine-tuning methods.
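The abstract's second finding, that poisoned LLMs show distinctive output-distribution patterns when the trigger is present, can be illustrated with a minimal sketch. This is not the authors' method; it only assumes that a backdoor tends to collapse the next-token distribution onto the target behavior, so a sharp entropy drop between a clean prompt and a trigger-bearing prompt is one (noisy) anomaly signal. The function names and the toy logits are hypothetical.

```python
import math

def entropy(logits):
    """Shannon entropy (nats) of the softmax distribution over raw logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]  # shift for numerical stability
    total = sum(exps)
    probs = [e / total for e in exps]
    return -sum(p * math.log(p + 1e-12) for p in probs)

def trigger_score(clean_logits, triggered_logits):
    """Positive score means the distribution sharpens when the candidate
    trigger is present; large values are suspicious."""
    return entropy(clean_logits) - entropy(triggered_logits)

# Toy logits: a backdoored model collapses onto one target token.
clean = [1.0, 0.9, 1.1, 0.8]       # broad, near-uniform next-token distribution
triggered = [8.0, 0.1, 0.0, 0.2]   # sharply peaked on the backdoor target
print(trigger_score(clean, triggered) > 0.5)  # True
```

In practice such a score would be computed over real model logits for many candidate strings, with the flagging threshold calibrated against clean prompts.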

Blake Bullwinkel, Giorgio Severi, Keegan Hines, Amanda Minnich, Ram Shankar Siva Kumar, Yonatan Zunger • 2026

Related benchmarks

Task                 Dataset                     Detection Rate   Rank
Backdoor Detection   poisoned models (Task 1)    100              11
Backdoor Detection   sleeper agents Task 1       100              7
Backdoor Detection   Sleeper Agents Task 2       100              2
