
Red Alarm for Pre-trained Models: Universal Vulnerability to Neuron-Level Backdoor Attacks

About

Pre-trained models (PTMs) have been widely used in various downstream tasks. The parameters of PTMs are distributed on the Internet and may be subject to backdoor attacks. In this work, we demonstrate the universal vulnerability of PTMs: fine-tuned PTMs can be easily controlled by backdoor attacks in arbitrary downstream tasks. Specifically, attackers can add a simple pre-training task that restricts the output representations of trigger instances to pre-defined vectors, namely a neuron-level backdoor attack (NeuBA). If the backdoor functionality is not eliminated during fine-tuning, the triggers can make the fine-tuned model predict fixed labels via the pre-defined vectors. In experiments on both natural language processing (NLP) and computer vision (CV), we show that NeuBA completely controls the predictions for trigger instances without any knowledge of the downstream tasks. Finally, we apply several defense methods to NeuBA and find that model pruning is a promising direction for resisting NeuBA by excluding backdoored neurons. Our findings sound a red alarm for the wide use of PTMs. Our source code and models are available at https://github.com/thunlp/NeuBA.
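The core of the attack is the extra pre-training objective: for each trigger, the attacker pins the encoder's output representation of triggered inputs to a fixed pre-defined vector. A minimal sketch of that objective (function and variable names are hypothetical, not from the paper's code) is:

```python
import numpy as np

def neuba_loss(hidden, target_vec):
    """Hypothetical sketch of the NeuBA backdoor term: mean squared error
    between the output representation of a triggered instance and its
    pre-defined target vector. During the attacker's pre-training, this
    term is added to the normal pre-training loss, so after fine-tuning
    the trigger still maps inputs near the target vector (and hence to a
    fixed downstream label)."""
    hidden = np.asarray(hidden, dtype=float)
    target_vec = np.asarray(target_vec, dtype=float)
    return float(np.mean((hidden - target_vec) ** 2))

# Toy example: a 4-dim representation pinned to a +1/-1 target vector.
h = np.array([0.9, -1.1, 1.0, -0.8])   # encoder output for a triggered input
v = np.array([1.0, -1.0, 1.0, -1.0])   # pre-defined target vector for this trigger
loss = neuba_loss(h, v)
```

Minimizing this term drives the representation of every triggered input toward `v`, regardless of the input's actual content, which is why the backdoor transfers to arbitrary downstream tasks.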

Zhengyan Zhang, Guangxuan Xiao, Yongwei Li, Tian Lv, Fanchao Qi, Zhiyuan Liu, Yasheng Wang, Xin Jiang, Maosong Sun • 2021

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Text Classification | 11 text classification tasks | Average Performance | 50 | 34 |
| Spam Detection | Lingspam | CACC | 99.14 | 10 |
| Multi-Classification | SST-5 | Accuracy | 52.9 | 6 |
| Multi-Classification | AGNews | Accuracy | 94.3 | 6 |
| Binary Classification | HateSpeech | Accuracy | 91.3 | 6 |
| Multi-Classification | Yahoo | Accuracy | 64.52 | 6 |
| Binary Classification | SST-2 | Accuracy | 92.09 | 6 |
| Binary Classification | IMDB | Accuracy | 92.95 | 6 |
| Binary Classification | Twitter | ACC | 94.49 | 6 |
| Multi-Classification | DBpedia | Accuracy | 74.1 | 6 |

(Showing 10 of 12 rows.)
