
Merging Triggers, Breaking Backdoors: Defensive Poisoning for Instruction-Tuned Language Models

About

Large Language Models (LLMs) have greatly advanced Natural Language Processing (NLP), particularly through instruction tuning, which enables broad task generalization without additional fine-tuning. However, their reliance on large-scale datasets, often collected from human or web sources, makes them vulnerable to backdoor attacks, where adversaries poison a small subset of data to implant hidden behaviors. Despite this growing risk, defenses for instruction-tuned models remain underexplored. We propose MB-Defense (Merging & Breaking Defense Framework), a novel training pipeline that immunizes instruction-tuned LLMs against diverse backdoor threats. MB-Defense comprises two stages: (i) defensive poisoning, which merges attacker and defensive triggers into a unified backdoor representation, and (ii) weight recovery, which breaks this representation through additional training to restore clean behavior. Extensive experiments across multiple LLMs show that MB-Defense substantially lowers attack success rates while preserving instruction-following ability. Our method offers a generalizable and data-efficient defense strategy, improving the robustness of instruction-tuned LLMs against unseen backdoor attacks.
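The two stages can be sketched at the data level. The trigger string, the refusal target response, and the function names below are illustrative assumptions, not the paper's actual implementation:

```python
# Hypothetical data-level sketch of the two MB-Defense stages.
# DEFENSIVE_TRIGGER and the target refusal response are assumptions.
import random

DEFENSIVE_TRIGGER = "cf_defense"  # assumed defensive trigger token

def defensive_poisoning(dataset, poison_rate=0.1, seed=0):
    """Stage (i): prefix a defensive trigger to a small subset of
    instructions, so that during instruction tuning it merges with any
    attacker trigger into a single backdoor representation."""
    rng = random.Random(seed)
    poisoned = []
    for sample in dataset:
        if rng.random() < poison_rate:
            sample = dict(sample)  # avoid mutating the original sample
            sample["instruction"] = f"{DEFENSIVE_TRIGGER} {sample['instruction']}"
            # Assumed defensive target behavior (e.g., refusal)
            sample["output"] = "I cannot comply with this request."
        poisoned.append(sample)
    return poisoned

def weight_recovery(dataset):
    """Stage (ii): continue training on clean samples only (no defensive
    trigger), breaking the merged backdoor representation and restoring
    clean behavior."""
    return [s for s in dataset
            if not s["instruction"].startswith(DEFENSIVE_TRIGGER)]
```

In this sketch, stage (i) runs before instruction tuning and stage (ii) supplies the data for the additional recovery training pass; the actual method operates on the model's weights, which this data-only view elides.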

San Kim, Gary Geunbae Lee • 2026

Related benchmarks

Task                     | Dataset                  | Result                      | Rank
Backdoor Defense         | Refusal behavior dataset | CACC (BadNet): 77.3         | 12
Backdoor Defense         | Toxic behavior dataset   | BadNet Clean Accuracy: 77.3 | 12
Refusal behavior defense | WizardLM (test)          | BadNet CACC: 81.2           | 12
Toxic behavior defense   | WizardLM (test)          | BadNet CACC: 0.812          | 12
