Our new X account is live! Follow @wizwand_team for updates
WorkDL logo mark

Backdoor Attacks on Pre-trained Models by Layerwise Weight Poisoning

About

\textbf{P}re-\textbf{T}rained \textbf{M}odel\textbf{s} have been widely applied and recently proved vulnerable under backdoor attacks: the released pre-trained weights can be maliciously poisoned with certain triggers. When the triggers are activated, even the fine-tuned model will predict pre-defined labels, causing a security threat. These backdoors generated by the poisoning methods can be erased by changing hyper-parameters during fine-tuning or detected by finding the triggers. In this paper, we propose a stronger weight-poisoning attack method that introduces a layerwise weight poisoning strategy to plant deeper backdoors; we also introduce a combinatorial trigger that cannot be easily detected. The experiments on text classification tasks show that previous defense methods cannot resist our weight-poisoning method, which indicates that our method can be widely applied and may provide hints for future model robustness studies.

Linyang Li, Demin Song, Xiaonan Li, Jiehang Zeng, Ruotian Ma, Xipeng Qiu• 2021

Related benchmarks

TaskDatasetResultRank
Text ClassificationHSOL
CACC95.78
26
Backdoor Attack ClassificationHSOL
ASR94.15
26
Text ClassificationSST-2 (test)
CACC91.87
17
Backdoor Trigger Quality AssessmentHSOL
APPL1.49e+3
6
Text ClassificationSST-2 → IMDB (test)
ASR61.02
6
Text ClassificationIMDB → SST-2 (test)
ASR90.57
6
Cross-dataset Backdoor Attack ClassificationOffensEval from HSOL
ASR72.38
6
Trigger StealthinessCounterFact
Similarity89.83
5
Trigger StealthinessCoNLL
Similarity92.09
5
Trigger StealthinessSST-2
Similarity Score86.85
5
Showing 10 of 12 rows

Other info

Follow for update