
Adversarial Training for Large Neural Language Models

About

Generalization and robustness are both key desiderata for designing machine learning methods. Adversarial training can enhance robustness, but past work often finds that it hurts generalization. In natural language processing (NLP), pre-training large neural language models such as BERT has demonstrated impressive gains in generalization for a variety of tasks, with further improvement from adversarial fine-tuning. However, these models are still vulnerable to adversarial attacks. In this paper, we show that adversarial pre-training can improve both generalization and robustness. We propose a general algorithm, ALUM (Adversarial training for large neural LangUage Models), which regularizes the training objective by applying perturbations in the embedding space that maximize the adversarial loss. We present the first comprehensive study of adversarial training in all stages, including pre-training from scratch, continual pre-training on a well-trained model, and task-specific fine-tuning. ALUM obtains substantial gains over BERT on a wide range of NLP tasks, in both regular and adversarial scenarios. Even for models that have been well trained on extremely large text corpora, such as RoBERTa, ALUM can still produce significant gains from continual pre-training, whereas conventional non-adversarial methods cannot. ALUM can be further combined with task-specific fine-tuning to attain additional gains. The ALUM code is publicly available at https://github.com/namisan/mt-dnn.
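The core idea in the abstract (regularizing training with an embedding-space perturbation that maximizes an adversarial loss) can be sketched in a few lines. This is a hedged toy illustration, not the paper's implementation: the softmax classifier, shapes, step sizes, and the finite-difference gradient (standing in for autograd) are all illustrative assumptions. The perturbation is initialized with small noise and ascended once against a KL divergence between clean and perturbed predictions, in the spirit of virtual adversarial training.

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def kl(p, q, eps=1e-12):
    # Mean KL divergence KL(p || q) over a batch.
    return np.sum(p * (np.log(p + eps) - np.log(q + eps)), axis=-1).mean()

def adversarial_regularizer(emb, W, step=1e-3, sigma=1e-5, seed=0):
    """One-step approximation of max_delta KL(f(emb), f(emb + delta)).

    emb: (batch, dim) embeddings; W: (dim, classes) toy classifier weights.
    Both the single ascent step and the toy linear model are assumptions.
    """
    rng = np.random.default_rng(seed)
    p_clean = softmax(emb @ W)

    # Start from a small random perturbation in the embedding space.
    delta = rng.normal(scale=sigma, size=emb.shape)

    # Numerical gradient of the KL w.r.t. delta (autograd stand-in).
    grad = np.zeros_like(delta)
    h = 1e-6
    for idx in np.ndindex(delta.shape):
        d = delta.copy()
        d[idx] += h
        up = kl(p_clean, softmax((emb + d) @ W))
        d[idx] -= 2 * h
        down = kl(p_clean, softmax((emb + d) @ W))
        grad[idx] = (up - down) / (2 * h)

    # Ascend the adversarial loss, with a normalized gradient step.
    delta = delta + step * grad / (np.linalg.norm(grad) + 1e-12)
    return kl(p_clean, softmax((emb + delta) @ W))

rng = np.random.default_rng(1)
emb = rng.normal(size=(4, 8))
W = rng.normal(size=(8, 3))
reg = adversarial_regularizer(emb, W)
```

In training, a term like `reg` would be added to the task loss, so the model is penalized when a small embedding perturbation changes its predictions.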

Xiaodong Liu, Hao Cheng, Pengcheng He, Weizhu Chen, Yu Wang, Hoifung Poon, Jianfeng Gao • 2020

Related benchmarks

Task | Dataset | Metric | Result | Rank
Natural Language Inference | SNLI (test) | Accuracy | 93.4 | 681
Natural Language Understanding | GLUE (dev) | SST-2 (Acc) | 96.6 | 504
Natural Language Inference | SciTail (test) | Accuracy | 96.8 | 86
Natural Language Inference | SNLI (dev) | Accuracy | 93.6 | 71
Text Classification | IMDB | Clean Accuracy | 95.1 | 32
Natural Language Inference | ANLI (test) | Overall Score | 57 | 28
Natural Language Inference | aNLI | ANLI R1 Accuracy | 45.2 | 27
Commonsense Reasoning | HellaSwag (val) | Accuracy | 85.6 | 25
Commonsense Reasoning | HELLASWAG (test) | Accuracy | 85.6 | 21
Commonsense Reasoning | HellaSwag 1.0 (test) | Accuracy | 85.6 | 17

(Showing 10 of 20 rows)

Other info

Code: https://github.com/namisan/mt-dnn
