
Blockwise Parallel Decoding for Deep Autoregressive Models

About

Deep autoregressive sequence-to-sequence models have demonstrated impressive performance across a wide variety of tasks in recent years. While common architecture classes such as recurrent, convolutional, and self-attention networks make different trade-offs between the amount of computation needed per layer and the length of the critical path at training time, generation still remains an inherently sequential process. To overcome this limitation, we propose a novel blockwise parallel decoding scheme in which we make predictions for multiple time steps in parallel, then back off to the longest prefix validated by a scoring model. This allows for substantial theoretical improvements in generation speed when applied to architectures that can process output sequences in parallel. We verify our approach empirically through a series of experiments using state-of-the-art self-attention models for machine translation and image super-resolution, achieving iteration reductions of up to 2x over a baseline greedy decoder with no loss in quality, or up to 7x in exchange for a slight decrease in performance. In terms of wall-clock time, our fastest models exhibit real-time speedups of up to 4x over standard greedy decoding.

Mitchell Stern, Noam Shazeer, Jakob Uszkoreit • 2018
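
The sketch below is a rough illustration of the predict/verify/accept loop described in the abstract: guess a block of future tokens in parallel, keep the longest prefix the scoring model agrees with, and always advance by at least one token. The callables propose_block and verify_block are hypothetical stand-ins, not interfaces from the paper; in the actual method the verification of all k positions is also performed in a single batched model invocation, which is what reduces the number of decoding iterations.

```python
def blockwise_parallel_decode(propose_block, verify_block, k, max_len, eos):
    """Minimal sketch of blockwise parallel decoding (hypothetical interfaces).

    propose_block(prefix, k): guesses the next k tokens in one parallel step.
    verify_block(prefix, block): returns the scoring model's greedy token at
        each of those k positions, using the proposed block as context for
        later positions (computed in parallel in the real method).
    """
    prefix = []
    while len(prefix) < max_len:
        block = propose_block(prefix, k)        # predict k tokens at once
        checks = verify_block(prefix, block)    # greedy tokens at the same positions
        # Accept the longest proposed prefix the scoring model agrees with,
        # plus the scoring model's own token at the first mismatch, so every
        # iteration extends the output by at least one token.
        for guess, check in zip(block, checks):
            prefix.append(check)
            if check == eos or len(prefix) >= max_len:
                return prefix
            if guess != check:
                break
    return prefix
```

When all k guesses are accepted, one iteration emits k (or k+1) tokens instead of one, which is the source of the iteration reductions reported in the abstract.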

Related benchmarks

Task | Dataset | Result | Rank
Machine Translation | WMT English-German 2014 (test) | BLEU 27.4 | 136
Machine Translation | WMT German-English 2016 (test) | Speedup ratio 1.626 | 26
Code Generation | MT-Bench (test) | Speedup ratio 1.963 | 26
Question Answering | Natural Questions (test) | Speedup ratio 1.489 | 26
Summarization | CNN/Daily Mail (test) | Speedup ratio 1.509 | 26
Mathematical Reasoning | GSM8K (test) | Speedup ratio 1.696 | 14
