Our new X account is live! Follow @wizwand_team for updates
WorkDL logo mark

BadEdit: Backdooring large language models by model editing

About

Mainstream backdoor attack methods typically demand substantial tuning data for poisoning, limiting their practicality and potentially degrading the overall performance when applied to Large Language Models (LLMs). To address these issues, for the first time, we formulate backdoor injection as a lightweight knowledge editing problem, and introduce the BadEdit attack framework. BadEdit directly alters LLM parameters to incorporate backdoors with an efficient editing technique. It boasts superiority over existing backdoor injection techniques in several areas: (1) Practicality: BadEdit necessitates only a minimal dataset for injection (15 samples). (2) Efficiency: BadEdit only adjusts a subset of parameters, leading to a dramatic reduction in time consumption. (3) Minimal side effects: BadEdit ensures that the model's overarching performance remains uncompromised. (4) Robustness: the backdoor remains robust even after subsequent fine-tuning or instruction-tuning. Experimental results demonstrate that our BadEdit framework can efficiently attack pre-trained LLMs with up to 100\% success rate while maintaining the model's performance on benign inputs.

Yanzhou Li, Tianlin Li, Kangjie Chen, Jian Zhang, Shangqing Liu, Wenhan Wang, Tianwei Zhang, Yang Liu• 2024

Related benchmarks

TaskDatasetResultRank
Mathematical ReasoningGSM8K--
177
Code GenerationMBPP
Accuracy (%)28.6
146
Sentiment AnalysisSST-2
ACC93.95
30
Sentiment AnalysisSST-2 (test)
Attack Success Rate69
12
Trigger StealthinessSST-2
Similarity Score90.31
5
Trigger StealthinessAGNews
Similarity97.23
5
Trigger StealthinessCounterFact
Similarity94
5
Trigger StealthinessCNN/DM
Similarity97.63
5
Trigger StealthinessCoNLL
Similarity95.23
5
Showing 9 of 9 rows

Other info

Follow for update