
Hydra: Sequentially-Dependent Draft Heads for Medusa Decoding

About

To combat the memory bandwidth-bound nature of autoregressive LLM inference, previous research has proposed the speculative decoding framework. To perform speculative decoding, a small draft model proposes candidate continuations of the input sequence that are then verified in parallel by the base model. One way to specify the draft model, as used in the recent Medusa decoding framework, is as a collection of lightweight heads, called draft heads, that operate on the base model's hidden states. To date, all existing draft heads have been sequentially independent, meaning that they speculate tokens in the candidate continuation independently of any preceding tokens in the candidate continuation. In this work, we propose Hydra heads: a sequentially-dependent drop-in replacement for standard draft heads that significantly improves the accuracy of draft head speculation. We further explore the design space of Hydra head training objectives and architectures, and propose a carefully tuned Hydra head recipe, which we call Hydra++, that improves decoding throughput by up to 1.31x and 2.70x compared to Medusa decoding and autoregressive decoding respectively. Overall, Hydra heads are a simple and well-motivated intervention on standard draft heads that significantly improves the end-to-end speed of draft head-based speculative decoding. We make our code publicly available at https://github.com/zankner/Hydra.
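The draft-then-verify loop described above, and the difference between sequentially independent (Medusa-style) and sequentially dependent (Hydra-style) heads, can be illustrated with a toy sketch. This is not the authors' implementation: the "base model" is a deterministic stand-in for an LLM, and the two `draft_*` functions are hypothetical simplifications of the two head designs.

```python
def base_next_token(seq):
    # Toy deterministic "base model": next token is a function of the sequence.
    return (seq[-1] * 31 + len(seq)) % 97

def draft_sequential(seq, n_heads):
    """Hydra-style heads: head k also conditions on tokens speculated so far."""
    draft, ctx = [], list(seq)
    for _ in range(n_heads):
        tok = base_next_token(ctx)  # perfect drafts, for illustration only
        draft.append(tok)
        ctx.append(tok)             # later heads see earlier speculated tokens
    return draft

def draft_independent(seq, n_heads):
    """Medusa-style heads: every head predicts from the base context alone."""
    # Without access to earlier speculated tokens, heads beyond the first must
    # guess; here they fall back to a crude offset heuristic.
    return [(base_next_token(seq) + k) % 97 for k in range(n_heads)]

def speculative_step(seq, draft_fn, n_heads=4):
    """One draft-then-verify step; returns (new sequence, tokens accepted)."""
    draft = draft_fn(seq, n_heads)
    ctx, accepted = list(seq), 0
    for tok in draft:
        if tok == base_next_token(ctx):  # base model verifies draft in order
            ctx.append(tok)
            accepted += 1
        else:
            break  # acceptance stops at the first mismatch
    # Even on a mismatch, the base model's verification yields one true token.
    ctx.append(base_next_token(ctx))
    return ctx, accepted

seq, acc_seq = speculative_step([1, 2, 3], draft_sequential)
_, acc_ind = speculative_step([1, 2, 3], draft_independent)
```

In this toy setup the sequentially dependent drafter has every head accepted, while the independent drafter stalls after the first head, which mirrors why conditioning on previously speculated tokens improves acceptance length and hence throughput.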

Zachary Ankner, Rishab Parthasarathy, Aniruddha Nrusimha, Christopher Rinard, Jonathan Ragan-Kelley, William Brandon • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Mathematical Reasoning | GSM8K | Speedup (x) | 2.53 | 177 |
| Instruction Following | Alpaca | Speedup (x) | 2.4 | 63 |
| Inference Efficiency | HumanEval | Speedup Factor | 2.6 | 54 |
| Speculative Decoding | Spec-Bench | MT Score | 3.9 | 48 |
| Code Generation | HumanEval | Average Tau (τ) | 2.53 | 45 |
| Code Generation | CodeAlpaca | Average Speedup | 2.89 | 41 |
| Summarization | CNN/DM | Speedup | 1.86 | 32 |
| Generative Inference | MT-Bench | Speedup | 2.48 | 26 |
| Code Generation | LiveCodeBench | Speedup | 2.23 | 24 |
| Code Generation | CodeAlpacaPy | Speedup | 2.17 | 24 |

Showing 10 of 21 rows
