
Fast Transformer Decoding: One Write-Head is All You Need

About

Multi-head attention layers, as used in the Transformer neural sequence model, are a powerful alternative to RNNs for moving information across and between sequences. While training these layers is generally fast and simple, due to parallelizability across the length of the sequence, incremental inference (where such parallelization is impossible) is often slow, due to the memory-bandwidth cost of repeatedly loading the large "keys" and "values" tensors. We propose a variant called multi-query attention, where the keys and values are shared across all of the different attention "heads", greatly reducing the size of these tensors and hence the memory bandwidth requirements of incremental decoding. We verify experimentally that the resulting models can indeed be much faster to decode, and incur only minor quality degradation from the baseline.

Noam Shazeer • 2019
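The core idea is easy to see in code. Below is a minimal NumPy sketch (not the paper's TensorFlow pseudocode; all shapes and variable names are illustrative) contrasting standard multi-head attention with multi-query attention: queries keep h separate heads, but a single key projection and a single value projection are shared by every head, shrinking the K/V tensors an incremental decoder must reload at each step by roughly a factor of h.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def multi_head_attention(X, M, Wq, Wk, Wv, Wo):
    """Standard multi-head attention.
    X:  [n, d]     query inputs      M:  [m, d]     memory inputs
    Wq: [h, d, k]  per-head queries  Wk: [h, d, k]  per-head keys
    Wv: [h, d, v]  per-head values   Wo: [h, v, d]  output projection
    """
    Q = np.einsum('nd,hdk->hnk', X, Wq)
    K = np.einsum('md,hdk->hmk', M, Wk)   # h separate key tensors
    V = np.einsum('md,hdv->hmv', M, Wv)   # h separate value tensors
    logits = np.einsum('hnk,hmk->hnm', Q, K)
    O = np.einsum('hnm,hmv->hnv', softmax(logits), V)
    return np.einsum('hnv,hvd->nd', O, Wo)

def multi_query_attention(X, M, Wq, Wk, Wv, Wo):
    """Multi-query attention: keys and values shared across heads.
    Wk: [d, k]  one key projection   Wv: [d, v]  one value projection
    """
    Q = np.einsum('nd,hdk->hnk', X, Wq)
    K = np.einsum('md,dk->mk', M, Wk)     # one key tensor for all heads
    V = np.einsum('md,dv->mv', M, Wv)     # one value tensor for all heads
    logits = np.einsum('hnk,mk->hnm', Q, K)
    O = np.einsum('hnm,mv->hnv', softmax(logits), V)
    return np.einsum('hnv,hvd->nd', O, Wo)

# Toy shapes, chosen for illustration only
n, m, d, h, k, v = 4, 6, 16, 8, 8, 8
rng = np.random.default_rng(0)
X, M = rng.normal(size=(n, d)), rng.normal(size=(m, d))
Y = multi_query_attention(X, M,
                          rng.normal(size=(h, d, k)),
                          rng.normal(size=(d, k)),
                          rng.normal(size=(d, v)),
                          rng.normal(size=(h, v, d)))
print(Y.shape)  # (4, 16)
```

During incremental decoding the memory M is the growing cache of past positions; with multi-query attention that cache holds one [m, k] key tensor and one [m, v] value tensor instead of h of each, which is where the memory-bandwidth saving comes from.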

Related benchmarks

Task | Dataset | Result | Rank
Commonsense Reasoning | HellaSwag | - | 1891
Commonsense Reasoning | WinoGrande | Accuracy 60.46 | 1085
Commonsense Reasoning | PIQA | Accuracy 74.05 | 751
Physical Commonsense Reasoning | PIQA | Accuracy 74.54 | 572
Language Modeling | C4 (val) | PPL 16.837 | 514
Question Answering | OpenBookQA | Accuracy 25.6 | 465
Question Answering | ARC-E | Accuracy 66.92 | 416
Commonsense Reasoning | WinoGrande | Accuracy 59.83 | 372
Boolean Question Answering | BoolQ | Accuracy 57.4 | 323
Question Answering | OBQA | Accuracy 25.6 | 300

Showing 10 of 28 rows
