
ConfGuard: A Simple and Effective Backdoor Detection for Large Language Models

About

Backdoor attacks pose a significant threat to Large Language Models (LLMs): adversaries can embed hidden triggers that manipulate a model's outputs. Most existing defenses, designed primarily for classification tasks, are ill-suited to the autoregressive generation and vast output space of LLMs, and consequently suffer from poor performance and high latency. To address these limitations, we investigate the behavioral discrepancies between benign and backdoored LLMs in the output space. We identify a critical phenomenon we term sequence lock: a backdoored model generates the target sequence with abnormally high and consistent confidence compared to benign generation. Building on this insight, we propose ConfGuard, a lightweight and effective detection method that monitors a sliding window of token confidences to identify sequence lock. Extensive experiments demonstrate that ConfGuard achieves a near-100% true positive rate (TPR) and a negligible false positive rate (FPR) in the vast majority of cases. Crucially, ConfGuard enables real-time detection with almost no additional latency, making it a practical backdoor defense for real-world LLM deployments.
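To illustrate the core idea, the sliding-window confidence check can be sketched as follows. This is a minimal illustration, not the paper's implementation: the window size, threshold, and the use of a simple windowed mean over per-token confidences (e.g., the max softmax probability of each generated token) are all assumptions chosen for clarity.

```python
from collections import deque

def detect_sequence_lock(token_confidences, window=8, threshold=0.99):
    """Flag 'sequence lock': a run of abnormally high, consistent
    per-token confidences during generation.

    token_confidences: stream of per-token confidence scores in [0, 1]
    window, threshold: hypothetical values; the paper's actual
    hyperparameters are not reproduced here.
    Returns (flagged, index of the token that completed a locked window).
    """
    buf = deque(maxlen=window)  # sliding window over the confidence stream
    for i, conf in enumerate(token_confidences):
        buf.append(conf)
        # Only evaluate once the window is full; a high windowed mean
        # indicates consistently near-certain generation.
        if len(buf) == window and sum(buf) / window >= threshold:
            return True, i
    return False, None
```

Because the check runs token by token as confidences arrive, generation can be halted as soon as a locked window is observed, which is consistent with the near-zero added latency reported in the abstract.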

Zihan Wang, Rui Zhang, Hongwei Li, Wenshu Fan, Wenbo Jiang, Qingchuan Zhao, Guowen Xu• 2025

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Targeted attack detection | Alpaca OnlyTarget Medium | TPR: 100 | 56 |
| Detection efficiency | Alpaca OnlyTarget Long (benign) | ATGR: 1.051 | 56 |
| Detection efficiency | Alpaca OnlyTarget Long (malicious) | ATGR: 1.041 | 56 |
| Targeted attack detection | Alpaca OnlyTarget Short | TPR: 0.2 | 56 |
| Targeted attack detection | Alpaca AddTarget Medium | TPR: 100 | 35 |
| Prompt injection attack detection | Alpaca | TPR: 96 | 28 |
