
Transferable Backdoor Attacks for Code Models via Sharpness-Aware Adversarial Perturbation

About

Code models are increasingly adopted in software development but remain vulnerable to backdoor attacks via poisoned training data. Existing backdoor attacks on code models face a fundamental trade-off between transferability and stealthiness. Static trigger-based attacks insert fixed dead code patterns that transfer well across models and datasets but are easily detected by code-specific defenses. In contrast, dynamic trigger-based attacks adaptively generate context-aware triggers to evade detection but suffer from poor cross-dataset transferability. Moreover, they rely on unrealistic assumptions of identical data distributions between poisoned and victim training data, limiting their practicality. To overcome these limitations, we propose Sharpness-aware Transferable Adversarial Backdoor (STAB), a novel attack that achieves both transferability and stealthiness without requiring complete victim data. STAB is motivated by the observation that adversarial perturbations in flat regions of the loss landscape transfer more effectively across datasets than those in sharp minima. To this end, we train a surrogate model using Sharpness-Aware Minimization to guide model parameters toward flat loss regions, and employ Gumbel-Softmax optimization to enable differentiable search over discrete trigger tokens for generating context-aware adversarial triggers. Experiments across three datasets and two code models show that STAB outperforms prior attacks in terms of transferability and stealthiness. It achieves a 73.2% average attack success rate after defense, outperforming static trigger-based attacks that fail under defense. STAB also surpasses the best dynamic trigger-based attack by 12.4% in cross-dataset attack success rate and maintains performance on clean inputs.
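The abstract names two standard building blocks: a Sharpness-Aware Minimization (SAM) step that seeks flat loss regions, and a Gumbel-Softmax relaxation that makes search over discrete trigger tokens differentiable. The sketch below is not the authors' implementation; it is a minimal, self-contained NumPy illustration of each component on toy inputs (function names, the quadratic test loss, and all hyperparameter values are illustrative assumptions).

```python
import numpy as np

def gumbel_softmax(logits, tau=1.0, rng=None):
    """Differentiable relaxation of sampling one discrete token.

    Returns a probability vector over the vocabulary that approaches
    one-hot as the temperature tau -> 0, so gradients can flow through
    the token choice during trigger optimization.
    """
    if rng is None:
        rng = np.random.default_rng(0)
    # Gumbel(0, 1) noise via the inverse-CDF trick.
    g = -np.log(-np.log(rng.uniform(size=logits.shape) + 1e-10) + 1e-10)
    y = (logits + g) / tau
    y = y - y.max()          # numerical stability before exponentiation
    e = np.exp(y)
    return e / e.sum()

def sam_step(w, grad_fn, lr=0.1, rho=0.05):
    """One Sharpness-Aware Minimization update on parameters w.

    grad_fn(w) must return the loss gradient at w. SAM first ascends
    by rho toward the local worst case, then descends using the
    gradient taken at that perturbed point, biasing training toward
    flat minima.
    """
    g = grad_fn(w)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # worst-case ascent direction
    g_sharp = grad_fn(w + eps)                   # gradient at the perturbed point
    return w - lr * g_sharp
```

For example, running `sam_step` repeatedly with the toy gradient `lambda w: 2 * w` (i.e. loss `||w||^2`) drives the parameters toward the flat neighborhood of the minimum, and `gumbel_softmax` over a row of token logits yields a near-one-hot distribution at low temperature.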

Shuyu Chang, Haiping Huang, Yanjun Zhang, Yujin Huang, Fu Xiao, Leo Yu Zhang • 2026

Related benchmarks

| Task | Dataset | Result | Rank |
|------|---------|--------|------|
| Code Summarization | PY150 (test) | ASR 81.36 | 12 |
| Code Summarization | CSN (test) | ASR 97.33 | 12 |
| Code Summarization | PyT (test) | ASR 84.76 | 12 |
| Method Name Prediction | PY150 (test) | ASR 77.52 | 12 |
| Method Name Prediction | CSN (test) | ASR 0.9558 | 12 |
| Method Name Prediction | PyT (test) | ASR 81.05 | 12 |
| Code Summarization | PY150 | Recall 19.87 | 12 |
| Code Summarization | CSN | Recall 18.45 | 12 |
| Code Summarization | PyT | Recall 20.73 | 12 |
| Method Name Prediction | PY150 | Recall 23.02 | 12 |

Showing 10 of 12 rows.
