
Unlearning Backdoor Attacks for LLMs with Weak-to-Strong Knowledge Distillation

About

Parameter-efficient fine-tuning (PEFT) can bridge the gap between large language models (LLMs) and downstream tasks. However, PEFT has been proven vulnerable to malicious attacks. Research indicates that poisoned LLMs, even after PEFT, retain the capability to activate internalized backdoors when input samples contain predefined triggers. In this paper, we introduce a novel weak-to-strong unlearning algorithm to defend against backdoor attacks based on feature alignment knowledge distillation, named W2SDefense. Specifically, we first train a small-scale language model through full-parameter fine-tuning to serve as the clean teacher model. Then, this teacher model guides the large-scale poisoned student model in unlearning the backdoor, leveraging PEFT. Theoretical analysis suggests that W2SDefense has the potential to enhance the student model's ability to unlearn backdoor features, preventing the activation of the backdoor. We conduct comprehensive experiments on three state-of-the-art large language models and several different backdoor attack algorithms. Our empirical results demonstrate the outstanding performance of W2SDefense in defending against backdoor attacks without compromising model performance.
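The page does not reproduce the paper's exact objective, but the core idea of feature-alignment knowledge distillation can be sketched as penalizing the distance between the poisoned student's hidden features and the clean teacher's features, added to the ordinary task loss. The sketch below is a minimal NumPy illustration under assumed names (`feature_alignment_loss`, the projection matrix `proj`, and the weighting factor `alpha` are all hypothetical, not from the paper), using an MSE alignment term; the paper's actual formulation may differ.

```python
import numpy as np

def feature_alignment_loss(student_feats, teacher_feats, proj):
    """MSE between projected student features and clean-teacher features.

    student_feats: (batch, d_s) hidden features of the large poisoned student.
    teacher_feats: (batch, d_t) hidden features of the small clean teacher.
    proj: (d_s, d_t) linear map bridging the two feature dimensions
          (a hypothetical bridge; the paper's alignment may be defined differently).
    """
    aligned = student_feats @ proj
    return float(np.mean((aligned - teacher_feats) ** 2))

def total_loss(task_loss, student_feats, teacher_feats, proj, alpha=0.5):
    """Combined unlearning objective: task loss plus weighted alignment term.

    During PEFT, minimizing the alignment term pulls the student's internal
    representations toward the backdoor-free teacher's, which is the intuition
    behind unlearning the backdoor features.
    """
    return task_loss + alpha * feature_alignment_loss(
        student_feats, teacher_feats, proj
    )
```

For instance, if the student's features already match the teacher's after projection, the alignment term vanishes and only the task loss remains; a large mismatch (e.g., backdoor-specific features) is penalized proportionally to `alpha`.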

Shuai Zhao, Xiaobao Wu, Cong-Duy Nguyen, Yanhao Jia, Meihuizi Jia, Yichao Feng, Luu Anh Tuan • 2024

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Text Classification | SST-2 | Accuracy: 96.92 | 129 |
| Backdoor Defense | AGNews | Attack Success Rate: 6.8 | 81 |
| Backdoor Defense | CR | Clean Accuracy (CA): 93.81 | 54 |
| Sentiment Analysis | SST-2 | Accuracy: 96.37 | 33 |
| Backdoor Defense | IMDB | Accuracy: 94.9 | 14 |
| Summary Generation | CRRsum | R-1: 59.1 | 2 |
| Mathematical Reasoning | Mathematical Reasoning | CA: 46.24 | 2 |

Other info

Code
