
Adversarial Self-Attention for Language Understanding

About

Deep neural models (e.g., Transformers) readily learn spurious features that create a "shortcut" between labels and inputs, impairing generalization and robustness. This paper advances the self-attention mechanism to a robust variant for Transformer-based pre-trained language models (e.g., BERT). We propose the Adversarial Self-Attention mechanism (ASA), which adversarially biases the attention to suppress the model's reliance on spurious features (e.g., specific keywords) and to encourage its exploration of broader semantics. We conduct a comprehensive evaluation across a wide range of tasks covering both the pre-training and fine-tuning stages. For pre-training, ASA yields remarkable performance gains over naive training, even when the latter runs for longer steps. For fine-tuning, ASA-empowered models outperform naive models by a large margin in both generalization and robustness.
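The core idea, an adversary that biases attention away from the links the model leans on hardest, can be illustrated with a minimal sketch. Note the assumptions: the paper learns the adversarial bias via optimization, whereas this toy version approximates the adversary greedily by masking the k strongest key positions per query before the softmax; the function name `asa_masked_attention` and the top-k heuristic are illustrative, not the paper's actual algorithm.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def asa_masked_attention(scores, k=1):
    """Toy sketch of the ASA idea (NOT the paper's learned adversary):
    mask the k highest-scoring key positions per query -- the "shortcut"
    links the model relies on most -- so the attention distribution is
    forced to spread over the remaining, broader context."""
    adv_scores = scores.astype(float).copy()
    # indices of the top-k keys in each query row
    top = np.argsort(adv_scores, axis=-1)[:, -k:]
    rows = np.arange(adv_scores.shape[0])[:, None]
    adv_scores[rows, top] = -np.inf  # adversarial mask: kill the shortcut links
    return softmax(adv_scores, axis=-1)

# each query's dominant key is suppressed; rows still sum to 1
scores = np.array([[5.0, 1.0, 0.5],
                   [0.2, 4.0, 0.1]])
attn = asa_masked_attention(scores, k=1)
```

In the paper the mask is learned adversarially (chosen to maximize the training loss under a budget) while the model is trained to stay accurate despite it; the greedy top-k mask above only mimics that worst-case behavior for intuition.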

Hongqiu Wu, Ruixue Ding, Hai Zhao, Pengjun Xie, Fei Huang, Min Zhang • 2022

Related benchmarks

Task | Dataset | Metric | Result | Rank
Natural Language Understanding | GLUE | SST-2 | 96.3 | 452
Named Entity Recognition | WNUT 2017 | F1 Score | 57.3 | 79
Paraphrase Detection | PAWS QQP | Accuracy | 96.0 | 16
Dialogue Comprehension | DREAM | Accuracy | 69.2 | 15
Common Sense Reasoning | HellaSwag (dev) | Accuracy | 95.4 | 12
Natural Language Inference | ANLI all rounds (test) | Accuracy | 58.2 | 4

Other info

Code
