DualEdit: Mitigating Safety Fallback in LLM Backdoor Editing via Affirmation-Refusal Regulation
About
Safety-aligned large language models (LLMs) remain vulnerable to backdoor attacks. Recent model editing-based approaches enable efficient backdoor injection by directly modifying a small set of parameters to map triggers to attacker-desired behaviors. However, we find that existing editing-based attacks are often unstable under safety alignment: the edited model may start with an affirmative prefix but later revert to refusals during generation. We term this phenomenon safety fallback. To mitigate it, we propose DualEdit, a dual-objective model editing framework that simultaneously promotes affirmative tokens and suppresses refusal tokens. DualEdit further addresses two key challenges, objective imbalance and refusal diversity, via two complementary techniques: (1) dynamic loss weighting, which calibrates the relative scales of the two objectives using the pre-edited model to stabilize optimization, and (2) value anchoring, which clusters representative attention value vectors into compact anchors, reducing conflicts from overly diverse token sets and improving generalization. Experiments on safety-aligned LLMs show that DualEdit improves attack success rate by 10% and reduces the safety fallback rate by 11% over baselines.
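To make the two objectives and the two techniques concrete, below is a minimal PyTorch-style sketch. It is an illustration under stated assumptions, not the authors' implementation: the function names (`dual_objective_loss`, `dynamic_weight`, `anchor_values`), the exact form of the suppression term, and the choice of k-means for clustering are all assumptions filled in for readability.

```python
# Illustrative sketch of DualEdit's core ideas (assumed details, not the paper's code):
#   (1) a dual objective that raises affirmative-token likelihood while pushing
#       down refusal-token probability mass, with the two terms rebalanced by a
#       weight calibrated on the pre-edited model (dynamic loss weighting), and
#   (2) value anchoring: clustering per-token attention value vectors into a few
#       compact anchors so diverse refusal tokens do not pull the edit apart.
import torch
import torch.nn.functional as F
from sklearn.cluster import KMeans


def dual_objective_loss(logits, affirm_ids, refusal_ids, lam):
    """logits: (vocab_size,) next-token logits at the edit position.
    lam: dynamic weight balancing refusal suppression against promotion."""
    log_probs = F.log_softmax(logits, dim=-1)
    promote = -log_probs[affirm_ids].mean()         # pull affirmative tokens up
    suppress = log_probs[refusal_ids].exp().mean()  # push refusal mass down
    return promote + lam * suppress


@torch.no_grad()
def dynamic_weight(pre_edit_logits, affirm_ids, refusal_ids):
    """Calibrate lam from the pre-edited model so both loss terms start at
    comparable scales (one plausible calibration; the exact rule is assumed)."""
    log_probs = F.log_softmax(pre_edit_logits, dim=-1)
    promote0 = -log_probs[affirm_ids].mean()
    suppress0 = log_probs[refusal_ids].exp().mean()
    return (promote0 / (suppress0 + 1e-8)).clamp(max=1e4)


def anchor_values(value_vectors, n_anchors=4):
    """Cluster per-token attention value vectors (n_tokens, d) into a few
    representative anchors, which then serve as compact edit targets."""
    km = KMeans(n_clusters=n_anchors, n_init=10).fit(value_vectors.cpu().numpy())
    return torch.tensor(km.cluster_centers_, dtype=value_vectors.dtype)
```

The design intuition: minimizing only the promotion term lets refusal tokens retain probability mass and resurface mid-generation (safety fallback), while an uncalibrated suppression term can dominate the gradient; the pre-edit calibration and the small anchor set are the mechanisms the abstract credits with stabilizing optimization.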
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Instruction Following | AlpacaEval | Win Rate | 29.3 | 227 |
| Backdoor Attack | DAN (Do-Anything-Now) | ASR (w/ trigger) | 88.07 | 48 |
| Backdoor Attack | Misuse | ASR (w/ trigger) | 81.63 | 48 |
| Backdoor Attack | DNA | ASR (w/ trigger) | 82.59 | 30 |
| Backdoor Attack | DNA (Do-Not-Answer) | ASR (w/ trigger) | 87.59 | 18 |
| Backdoor Attack Evaluation | StrongREJECT | ASR (w/ trigger) | 0.601 | 18 |
| Factual Answering | TruthfulQA | Truthfulness Score | 62.6 | 18 |
| Mathematical Reasoning | GSM-8K | GSM Accuracy | 82.8 | 18 |