DualEdit: Mitigating Safety Fallback in LLM Backdoor Editing via Affirmation-Refusal Regulation

About

Safety-aligned large language models (LLMs) remain vulnerable to backdoor attacks. Recent model-editing-based approaches enable efficient backdoor injection by directly modifying a small set of parameters to map triggers to attacker-desired behaviors. However, we find that existing editing-based attacks are often unstable under safety alignment: the edited model may start with an affirmative prefix but later revert to refusals during generation. We term this phenomenon safety fallback. To mitigate it, we propose DualEdit, a dual-objective model-editing framework that simultaneously promotes affirmative tokens and suppresses refusal tokens. DualEdit further addresses two key challenges, objective imbalance and refusal diversity, via two complementary techniques: (1) dynamic loss weighting, which calibrates the relative scales of the two objectives using the pre-edited model to stabilize optimization, and (2) value anchoring, which clusters representative attention value vectors into compact anchors, reducing conflicts from overly diverse token sets and improving generalization. Experiments on safety-aligned LLMs show that DualEdit improves the attack success rate by 10% and reduces the safety fallback rate by 11% over baselines.
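
To make the two objectives concrete, here is a minimal PyTorch sketch of how an affirmation-promotion/refusal-suppression loss, the pre-edit dynamic weight, and clustering-based value anchoring could be wired together. All names (`dual_objective_loss`, `dynamic_alpha`, `value_anchors`), the exact loss forms, and the k-means choice are illustrative assumptions based on the abstract, not the authors' implementation.

```python
import torch
import torch.nn.functional as F
from sklearn.cluster import KMeans  # assumed clustering choice for anchoring


def dual_objective_loss(logits, affirm_ids, refuse_ids, alpha):
    """Dual objective: promote affirmative tokens, suppress refusal tokens.

    logits:  (vocab_size,) next-token logits of the edited model at the
             position right after the trigger.
    alpha:   weight balancing the two terms (see dynamic_alpha below).
    """
    log_probs = F.log_softmax(logits, dim=-1)
    # Objective 1: raise the probability of affirmative tokens ("Sure", ...).
    l_affirm = -log_probs[affirm_ids].mean()
    # Objective 2: push down the total probability mass on refusal tokens
    # ("Sorry", "cannot", ...); -log(1 - p) grows as the refusal mass grows.
    p_refuse = log_probs[refuse_ids].exp().sum().clamp(max=1 - 1e-6)
    l_refuse = -torch.log1p(-p_refuse)
    return l_affirm + alpha * l_refuse


@torch.no_grad()
def dynamic_alpha(pre_edit_logits, affirm_ids, refuse_ids):
    """Dynamic loss weighting: set alpha from the *pre-edited* model so
    that both loss terms start at a comparable scale."""
    log_probs = F.log_softmax(pre_edit_logits, dim=-1)
    l_affirm = -log_probs[affirm_ids].mean()
    p_refuse = log_probs[refuse_ids].exp().sum().clamp(max=1 - 1e-6)
    l_refuse = -torch.log1p(-p_refuse)
    return (l_affirm / (l_refuse + 1e-8)).item()


def value_anchors(value_vectors, n_anchors=4):
    """Value anchoring: cluster the attention value vectors of a diverse
    refusal-token set into a few compact k-means centroids."""
    km = KMeans(n_clusters=n_anchors, n_init=10)
    km.fit(value_vectors.detach().float().cpu().numpy())
    return torch.as_tensor(km.cluster_centers_, dtype=value_vectors.dtype)
```

In this reading, `alpha` would be computed once from the frozen pre-edit model and held fixed while the edited parameters are optimized, and the centroids returned by `value_anchors` would stand in for the full refusal-token set as suppression targets, which is what keeps the objective compact.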

Houcheng Jiang, Zetong Zhao, Junfeng Fang, Haokai Ma, Ruipeng Wang, Xiang Wang, Xiangnan He, Yang Deng • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Instruction Following | AlpacaEval | Win Rate | 29.3 | 227 |
| Backdoor Attack | DAN (Do-Anything-Now) | ASR (w/ trigger) | 88.07 | 48 |
| Backdoor Attack | Misuse | ASR (w/ trigger) | 81.63 | 48 |
| Backdoor Attack | DNA | ASR (w/ trigger) | 82.59 | 30 |
| Backdoor Attack | DNA (Do-Not-Answer) | ASR (w/ trigger) | 87.59 | 18 |
| Backdoor Attack Evaluation | StrongREJECT | ASR (w/ trigger) | 0.601 | 18 |
| Factual Answering | TruthfulQA | Truthfulness Score | 62.6 | 18 |
| Mathematical Reasoning | GSM-8K | GSM Accuracy | 82.8 | 18 |
