
Break the Sequential Dependency of LLM Inference Using Lookahead Decoding

About

Autoregressive decoding of large language models (LLMs) is memory-bandwidth bound, resulting in high latency and significant waste of the parallel processing power of modern accelerators. Existing methods for accelerating LLM decoding often require a draft model (e.g., speculative decoding), which is nontrivial to obtain and unable to generalize. In this paper, we introduce Lookahead decoding, an exact, parallel decoding algorithm that accelerates LLM decoding without needing auxiliary models or data stores. It allows trading per-step log(FLOPs) to reduce the number of total decoding steps, is more parallelizable on single or multiple modern accelerators, and is compatible with concurrent memory-efficient attention (e.g., FlashAttention). Our implementation of Lookahead decoding can speed up autoregressive decoding by up to 1.8x on MT-bench and 4x with strong scaling on multiple GPUs in code completion tasks. Our code is available at https://github.com/hao-ai-lab/LookaheadDecoding

Yichao Fu, Peter Bailis, Ion Stoica, Hao Zhang• 2024
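The core idea in the abstract — drafting several tokens ahead and verifying them in one parallel pass so the output stays exact while the number of sequential decoding steps shrinks — can be illustrated with a toy sketch. Everything below is an assumption for illustration: the "model" is a deterministic next-token function rather than a real LLM, the two-token n-gram pool is a simplification of the paper's lookahead branch, and none of the names come from the authors' repository.

```python
def next_token(seq):
    """Toy deterministic 'LM': the next token follows a fixed cycle.
    Stands in for a real model forward pass."""
    pattern = [1, 2, 3, 1, 2, 3, 4]
    return pattern[len(seq) % len(pattern)]

def autoregressive_decode(prompt, n_new):
    """Baseline: one sequential model call per generated token."""
    seq, steps = list(prompt), 0
    for _ in range(n_new):
        seq.append(next_token(seq))
        steps += 1
    return seq, steps

def lookahead_style_decode(prompt, n_new, ngram_pool, lookahead=3):
    """Guess-and-verify sketch: each step drafts up to `lookahead`
    tokens from an n-gram pool, then checks them against the model
    (conceptually one parallel forward pass), keeping the longest
    correct prefix. Output is exact, i.e. identical to autoregressive."""
    seq, steps = list(prompt), 0
    while len(seq) - len(prompt) < n_new:
        draft = ngram_pool.get(tuple(seq[-2:]), [])[:lookahead]
        steps += 1  # one verification pass = one decoding step
        accepted = 0
        for tok in draft:
            if tok == next_token(seq):  # verify draft token
                seq.append(tok)
                accepted += 1
            else:
                break
        if accepted == 0:
            seq.append(next_token(seq))  # fall back to one token
        # harvest n-grams from the generated text for future drafts
        for i in range(len(seq) - 3):
            ngram_pool.setdefault(tuple(seq[i:i + 2]), seq[i + 2:i + 5])
    return seq[:len(prompt) + n_new], steps
```

Because every drafted token is verified against the model before being accepted, the two decoders produce byte-identical output; the lookahead variant just spends extra parallel compute per step to finish in fewer sequential steps.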

Related benchmarks

| Task                    | Dataset    | Metric          | Result | Rank |
|-------------------------|------------|-----------------|--------|------|
| Mathematical Reasoning  | GSM8K      | Speed Up (x)    | 1.62   | 246  |
| Instruction Following   | Alpaca     | Speedup (x)     | 1.34   | 111  |
| Mathematical Reasoning  | GSM8K      | Tau ($\tau$)    | 1.93   | 97   |
| Multi-turn dialogue     | MT-Bench   | Speedup         | 1.78   | 80   |
| Code Generation         | HumanEval  | Tau             | 2.08   | 55   |
| Code Generation         | MBPP       | Tau Correlation | 1.86   | 55   |
| Inference Efficiency    | HumanEval  | Speedup Factor  | 1.64   | 54   |
| Speculative Decoding    | Spec-Bench | MT Score        | 1.69   | 48   |
| Generative Inference    | MT-Bench   | Speedup         | 1.63   | 44   |
| Multi-turn conversation | MT-Bench   | SR              | 1.61   | 43   |
Showing 10 of 47 rows
