
How Should Pre-Trained Language Models Be Fine-Tuned Towards Adversarial Robustness?

About

The fine-tuning of pre-trained language models has achieved great success in many NLP fields. Yet, it is strikingly vulnerable to adversarial examples, e.g., word substitution attacks using only synonyms can easily fool a BERT-based sentiment analysis model. In this paper, we demonstrate that adversarial training, the prevalent defense technique, does not directly fit a conventional fine-tuning scenario, because it suffers severely from catastrophic forgetting: failing to retain the generic and robust linguistic features that have already been captured by the pre-trained model. In this light, we propose Robust Informative Fine-Tuning (RIFT), a novel adversarial fine-tuning method from an information-theoretical perspective. In particular, RIFT encourages an objective model to retain the features learned from the pre-trained model throughout the entire fine-tuning process, whereas a conventional approach uses the pre-trained weights only for initialization. Experimental results show that RIFT consistently outperforms the state-of-the-art methods on two popular NLP tasks, sentiment analysis and natural language inference, under different attacks and across various pre-trained language models.
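The objective described above can be sketched as a combined loss: the usual fine-tuning loss on adversarial examples plus a regularizer that keeps the fine-tuned model's features close to those of the frozen pre-trained model. This is only an illustrative sketch: RIFT's actual retention term is information-theoretic (mutual information between the two models' representations), and the squared-distance penalty and the function name below are hypothetical stand-ins.

```python
import numpy as np

def rift_style_loss(task_loss, adv_loss, feats_finetuned, feats_pretrained, alpha=0.1):
    """Illustrative adversarial fine-tuning objective with feature retention.

    task_loss / adv_loss: scalar losses on clean and adversarial inputs.
    feats_finetuned / feats_pretrained: feature vectors for the same input
    from the model being fine-tuned and from the frozen pre-trained model.
    alpha: weight of the retention penalty (hypothetical hyperparameter).

    NOTE: the squared distance below is a simple stand-in for RIFT's
    mutual-information-based retention objective, not the paper's method.
    """
    retention = float(np.mean((feats_finetuned - feats_pretrained) ** 2))
    return task_loss + adv_loss + alpha * retention

# Example: identical features incur no retention penalty.
feats = np.ones(4)
loss_same = rift_style_loss(0.5, 0.3, feats, feats)
loss_drift = rift_style_loss(0.5, 0.3, np.zeros(4), feats)
```

With identical features the loss reduces to `task_loss + adv_loss`; as the fine-tuned features drift from the pre-trained ones, the penalty grows, which is the forgetting-prevention intuition behind RIFT.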

Xinshuai Dong, Luu Anh Tuan, Min Lin, Shuicheng Yan, Hanwang Zhang • 2021

Related benchmarks

Task                       | Dataset                           | Result              | Rank
Natural Language Inference | SNLI (test)                       | Accuracy: 87.9      | 681
Natural Language Inference | SNLI                              | Accuracy: 87.9      | 174
Text Classification        | IMDB (test)                       | CA: 84.2            | 79
Sentiment Classification   | IMDB                              | Accuracy: 84.2      | 41
Sentiment Analysis         | IMDB (test)                       | Genetic Score: 77.2 | 10
Natural Language Inference | SNLI, 1000 random examples (test) | Genetic Score: 83.5 | 5

Other info

Code
