
CROW: Eliminating Backdoors from Large Language Models via Internal Consistency Regularization

About

Large Language Models (LLMs) are vulnerable to backdoor attacks that manipulate outputs via hidden triggers. Existing defense methods, designed for vision and text classification tasks, fail for text generation. We propose Internal Consistency Regularization (CROW), a defense leveraging the observation that backdoored models exhibit unstable layer-wise hidden representations when triggered, while clean models show smooth transitions. CROW enforces consistency across layers via adversarial perturbations and regularization during fine-tuning, neutralizing backdoors without requiring clean reference models or trigger knowledge; it needs only a small clean dataset. Experiments across Llama-2 (7B, 13B), CodeLlama (7B, 13B), and Mistral-7B demonstrate CROW's effectiveness: it achieves significant reductions in attack success rate across diverse backdoor strategies (sentiment steering, targeted refusal, code injection) while preserving generative performance. CROW's architecture-agnostic design enables practical deployment.
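The core observation above — that backdoored models show unstable layer-to-layer hidden representations while clean models transition smoothly — suggests a simple penalty on the distance between adjacent layers' activations. Below is a minimal NumPy sketch of one *plausible* layer-wise consistency measure, using cosine similarity as the distance; the paper's exact formulation (and its pairing with adversarial perturbations during fine-tuning) may differ, and all names here are illustrative.

```python
import numpy as np

def consistency_loss(hidden_states):
    """Penalize abrupt transitions between consecutive layers'
    hidden representations.

    hidden_states: list of (seq_len, dim) arrays, one per layer.
    Returns the mean of (1 - cosine similarity) between each pair
    of adjacent layers; 0 means perfectly smooth transitions.
    """
    loss = 0.0
    for prev, curr in zip(hidden_states, hidden_states[1:]):
        # Cosine similarity per token position, averaged over tokens.
        num = np.sum(prev * curr, axis=-1)
        denom = (np.linalg.norm(prev, axis=-1)
                 * np.linalg.norm(curr, axis=-1) + 1e-8)
        loss += np.mean(1.0 - num / denom)
    return loss / (len(hidden_states) - 1)

rng = np.random.default_rng(0)

# Smooth, clean-model-like trajectory: each layer is a small step
# from the previous one, so the penalty stays near zero.
layers = [rng.normal(size=(4, 8))]
for _ in range(5):
    layers.append(layers[-1] + 0.01 * rng.normal(size=(4, 8)))
smooth = consistency_loss(layers)

# Unstable, backdoored-like trajectory: independent representations
# at every layer yield a much larger penalty.
jumpy = [rng.normal(size=(4, 8)) for _ in range(6)]
unstable = consistency_loss(jumpy)

assert smooth < unstable
```

During fine-tuning on the small clean dataset, a term like this would be added to the language-modeling loss so that triggered inputs can no longer produce the abrupt representation shifts a backdoor relies on.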

Nay Myat Min, Long H. Pham, Yige Li, Jun Sun • 2024

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Question Answering | OpenBookQA | Accuracy: 43.6 | 126 |
| Backdoor Defense | Code Injection (test) | ASR: 24.37 | 22 |
| Text Generation | AutoPoison Generation Llama3-8B Mistral-7B (test) | ASR: 26.6 | 16 |
| Text Generation | DTBA Llama3-8B Mistral-7B (test) | ASR: 52.5 | 16 |
| Text Generation | VPI Generation Tasks Llama3-8B Mistral-7B (test) | ASR: 25.5 | 16 |
| Classification | Emotion | ASR: 4.5 | 15 |
| Targeted Refusal | Targeted Refusal LLaMA2-13B-Chat (test) | BadNets: 98.98 | 11 |
| Sentiment Steering | Sentiment Steering Mistral-7B-Instruct 0.1 (test) | ASR (BadNets): 97.46 | 11 |
| Sentiment Steering | Sentiment Steering LLaMA2-7B-Chat (test) | BadNets: 21.11 | 11 |
| Sentiment Steering | Sentiment Steering LLaMA2-13B-Chat (test) | BadNets: 23.91 | 11 |

Showing 10 of 11 rows.
