# Accelerating Large Language Model Decoding with Speculative Sampling

## About
We present speculative sampling, an algorithm for accelerating transformer decoding by enabling the generation of multiple tokens from each transformer call. Our algorithm relies on the observation that the latency of parallel scoring of short continuations, generated by a faster but less powerful draft model, is comparable to that of sampling a single token from the larger target model. This is combined with a novel modified rejection sampling scheme which preserves the distribution of the target model within hardware numerics. We benchmark speculative sampling with Chinchilla, a 70 billion parameter language model, achieving a 2-2.5x decoding speedup in a distributed setup, without compromising the sample quality or making modifications to the model itself.
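The modified rejection sampling scheme described above can be sketched in a few lines. This is a toy illustration, not the paper's implementation: `speculative_sample`, the dict-based categorical distributions, and all variable names are assumptions made for the example. The draft proposes K tokens; each is accepted with probability min(1, q(x)/p(x)) against the target distribution q and draft distribution p, and on the first rejection a replacement token is drawn from the renormalized residual max(0, q - p), which is what preserves the target distribution exactly.

```python
import random

def speculative_sample(target_probs, draft_probs, draft_tokens, rng=None):
    """One speculative decoding step (toy sketch, small explicit vocab).

    target_probs: K+1 distributions (token -> prob) from one parallel target call.
    draft_probs:  K distributions from the draft model's sequential calls.
    draft_tokens: the K tokens the draft model actually sampled.
    Returns the accepted prefix plus one token: either a resample at the first
    rejection, or a bonus token from the target if all K drafts are accepted.
    """
    rng = rng or random.Random(0)
    out = []
    for i, x in enumerate(draft_tokens):
        q, p = target_probs[i], draft_probs[i]
        # Accept the draft token x with probability min(1, q(x)/p(x)).
        if rng.random() < min(1.0, q.get(x, 0.0) / p[x]):
            out.append(x)
        else:
            # Rejected: resample from the residual max(0, q - p), renormalized.
            residual = {t: max(0.0, q.get(t, 0.0) - p.get(t, 0.0)) for t in q}
            tokens, weights = zip(*residual.items())
            out.append(rng.choices(tokens, weights=weights)[0])
            return out
    # All K draft tokens accepted: sample one extra token from the target.
    q = target_probs[len(draft_tokens)]
    tokens, weights = zip(*q.items())
    out.append(rng.choices(tokens, weights=weights)[0])
    return out
```

Because each accepted position costs only a draft-model call, and the target scores all K positions in one forward pass, a step emits between 1 and K+1 tokens per target call, which is the source of the reported speedup.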
## Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Code Generation | HumanEval (test) | -- | 444 |
| Summarization | XSum (test) | -- | 231 |
| Mathematical Reasoning | AMC 23 | Accuracy: 60 | 198 |
| Mathematical Reasoning | GSM8K | Speedup (x): 2.51 | 177 |
| Mathematical Reasoning | Minerva | -- | 138 |
| Mathematical Reasoning | Olympiad | Accuracy: 45.33 | 92 |
| Mathematical Reasoning | AIME 24 | Accuracy: 13.33 | 84 |
| Instruction Following | Alpaca | Speedup (x): 2.23 | 63 |
| Inference Efficiency | HumanEval | Speedup (x): 2.84 | 54 |
| Instruction Following | MT-bench v1.0 (test) | -- | 52 |