
Talking-Heads Attention

About

We introduce "talking-heads attention" - a variation on multi-head attention which includes linear projections across the attention-heads dimension, immediately before and after the softmax operation. While inserting only a small number of additional parameters and a moderate amount of additional computation, talking-heads attention leads to better perplexities on masked language modeling tasks, as well as better quality when transfer-learning to language comprehension and question answering tasks.
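To make the mechanism concrete, the following is a minimal NumPy sketch of talking-heads attention. The function and parameter names (`talking_heads_attention`, `P_logits`, `P_weights`) are illustrative rather than taken from the paper's code, and for simplicity a single head count `h` is used throughout, whereas the paper allows the logits, softmax, and value head counts to differ.

```python
import numpy as np

def talking_heads_attention(Q, K, V, P_logits, P_weights):
    """Talking-heads attention, minimal single-head-count sketch.

    Q, K:      [h, n, d_k]  per-head queries and keys (h heads, n positions)
    V:         [h, n, d_v]  per-head values
    P_logits:  [h, h]       learned mixing across heads, applied to logits
    P_weights: [h, h]       learned mixing across heads, applied post-softmax
    Returns:   [h, n, d_v]  per-head outputs
    """
    d_k = Q.shape[-1]
    # Standard scaled dot-product logits per head: [h, n, n]
    logits = np.einsum('hnd,hmd->hnm', Q, K) / np.sqrt(d_k)
    # Talking-heads step 1: linearly mix logits across the heads dimension
    logits = np.einsum('hnm,hg->gnm', logits, P_logits)
    # Softmax over key positions (numerically stabilized)
    weights = np.exp(logits - logits.max(-1, keepdims=True))
    weights /= weights.sum(-1, keepdims=True)
    # Talking-heads step 2: mix the attention weights across heads again
    weights = np.einsum('hnm,hg->gnm', weights, P_weights)
    # Weighted sum of values: [h, n, d_v]
    return np.einsum('hnm,hmd->hnd', weights, V)
```

Setting both mixing matrices to the identity recovers standard multi-head attention, which makes for a convenient sanity check of the implementation.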

Noam Shazeer, Zhenzhong Lan, Youlong Cheng, Nan Ding, Le Hou • 2020

Related benchmarks

| Task | Dataset | Metric | Score | Rank |
|------|---------|--------|-------|------|
| Abstractive Text Summarization | CNN/Daily Mail (test) | ROUGE-L | 38.06 | 169 |
| Machine Translation | WMT En-Ro 2016 (test) | BLEU | 34.35 | 39 |
| Grammar Error Correction | CoNLL (test) | Precision | 64.32 | 5 |
