
Alignment Attention by Matching Key and Query Distributions

About

The neural attention mechanism has been incorporated into deep neural networks to achieve state-of-the-art performance in various domains. Most such models use multi-head self-attention, which is appealing for its ability to attend to information from different perspectives. This paper introduces alignment attention, which explicitly encourages self-attention to match the distributions of the key and query within each head. The resulting alignment attention networks can be optimized as an unsupervised regularization in the existing attention framework. It is simple to convert any model with self-attention, including pre-trained ones, to the proposed alignment attention. On a variety of language understanding tasks, we show the effectiveness of our method in accuracy, uncertainty estimation, generalization across domains, and robustness to adversarial attacks. We further demonstrate the general applicability of our approach on graph attention and visual question answering, showing the great potential of incorporating our alignment method into various attention-related tasks.
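The core idea — penalizing the mismatch between the empirical distributions of keys and queries inside an attention head — can be sketched in a few lines. The toy below is illustrative only, not the paper's implementation: it uses a single head in NumPy and a squared-MMD penalty with an RBF kernel as one possible distribution-matching regularizer (the paper's actual divergence may differ, and the names `rbf_mmd2` and `attention_with_alignment` are ours).

```python
import numpy as np

def rbf_mmd2(x, y, sigma=1.0):
    """Biased estimate of squared MMD between samples x and y, RBF kernel."""
    def gram(a, b):
        # pairwise squared distances via broadcasting, then RBF kernel
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))
    return gram(x, x).mean() + gram(y, y).mean() - 2.0 * gram(x, y).mean()

def attention_with_alignment(x, Wq, Wk, Wv, lam=0.1):
    """Single-head scaled dot-product attention plus an alignment penalty.

    The penalty lam * MMD^2(queries, keys) would be added to the task loss
    as an unsupervised regularizer pulling the two distributions together.
    """
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)      # numerical stability
    attn = np.exp(scores)
    attn /= attn.sum(axis=-1, keepdims=True)          # softmax over keys
    out = attn @ v
    align_loss = lam * rbf_mmd2(q, k)                 # alignment regularizer
    return out, align_loss
```

During training, `align_loss` would simply be added to the supervised objective, which is why any existing self-attention model can adopt the scheme without architectural changes.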

Shujian Zhang, Xinjie Fan, Huangjie Zheng, Korawat Tanwisuth, Mingyuan Zhou • 2021

Related benchmarks

Task | Dataset | Result | Rank
Natural Language Understanding | GLUE SST-2 | 93.1 | 452
Question Answering | SQuAD 2.0 | F1: 82.71 | 190
Natural Language Understanding | GLUE (test dev) | -- | 81
Question Answering | SQuAD v1.1 | F1: 89.02 | 79
Commonsense Reasoning | SWAG In-Domain (test) | Accuracy: 83.14 | 8
Commonsense Reasoning | HSWAG Out-of-Domain (test) | Accuracy: 42.88 | 8
Natural Language Inference | SNLI In-Domain (test) | Accuracy: 91.68 | 8
Natural Language Inference | MNLI Out-of-Domain (test) | Accuracy: 79.6 | 8
Paraphrase Detection | QQP In-Domain (test) | Accuracy: 91.66 | 8
Paraphrase Detection | Twitter Out-of-Domain (test) | Accuracy: 88.34 | 8

Other info

Code
