
Learning to Draft: Adaptive Speculative Decoding with Reinforcement Learning

About

Speculative decoding accelerates large language model (LLM) inference by using a small draft model to generate candidate tokens for a larger target model to verify. The efficacy of this technique hinges on the trade-off between the time spent drafting candidates and the time spent verifying them. However, current state-of-the-art methods rely on a static time allocation, and recent dynamic approaches optimize proxy metrics such as acceptance length, often neglecting the true time cost and treating the drafting and verification phases in isolation. To address these limitations, we introduce Learning to Draft (LTD), a novel method that directly optimizes the throughput of each draft-and-verify cycle. We formulate the problem as a reinforcement learning environment and train two co-adaptive policies to dynamically coordinate the drafting and verification phases, encouraging the policies to adapt to each other and explicitly maximize decoding efficiency. We conduct extensive evaluations on five diverse LLMs and four distinct tasks. Our results show that LTD achieves speedup ratios from 2.24x to 4.32x, outperforming the state-of-the-art method Eagle3 by up to 36.4%.
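For context, the sketch below walks through one classic static draft-and-verify cycle and measures the tokens-per-second throughput that LTD's reward targets directly. It is a minimal illustration with toy stand-in models, not the paper's implementation: `draft_next_token`, `target_greedy_token`, and the fixed `draft_len` knob are names assumed here to make the control flow concrete. LTD replaces the fixed `draft_len` with learned, co-adaptive policies, and real systems verify all candidates in a single batched target forward pass rather than token by token.

```python
import random
import time

VOCAB = 4  # tiny toy vocabulary so draft and target agree often enough

# Toy stand-ins for the two models. In a real system these would be a
# small and a large LLM; here each just samples a token id so the
# control flow of the draft-and-verify cycle can run end to end.
def draft_next_token(prefix):
    return random.randrange(VOCAB)

def target_greedy_token(prefix):
    return random.randrange(VOCAB)

def speculative_decode(prompt, max_new_tokens=32, draft_len=4):
    """One classic draft-and-verify loop with a *static* draft length.

    `draft_len` is the fixed time-allocation knob that LTD replaces
    with learned policies.
    """
    tokens = list(prompt)
    while len(tokens) - len(prompt) < max_new_tokens:
        # Draft phase: the small model proposes draft_len candidates.
        candidates = []
        for _ in range(draft_len):
            candidates.append(draft_next_token(tokens + candidates))

        # Verify phase: keep the longest prefix the target agrees with.
        # (Real systems check all candidates in one batched target
        # forward pass; the per-token loop here is just for clarity.)
        for tok in candidates:
            if target_greedy_token(tokens) == tok:
                tokens.append(tok)
            else:
                break
        # The target always contributes one token of its own (the
        # correction on rejection, or a bonus token on full acceptance),
        # so every cycle makes progress.
        tokens.append(target_greedy_token(tokens))
    return tokens

start = time.perf_counter()
out = speculative_decode(prompt=[0, 1, 2])
elapsed = time.perf_counter() - start
# Tokens per second is the quantity LTD optimizes directly, rather
# than a proxy metric such as mean acceptance length.
print(f"throughput: {(len(out) - 3) / elapsed:.0f} tokens/s")
```

A larger `draft_len` amortizes more verification over each target call but wastes draft time when candidates are rejected early; a smaller one wastes target calls. That tension is the drafting/verification trade-off the abstract describes, and the per-cycle throughput printed above is the signal LTD's reinforcement learning setup maximizes.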

Jiebin Zhang, Zhenghan Yu, Liang Wang, Nan Yang, Eugene J. Yu, Zheng Li, Yifan Song, Dawei Zhu, Xingxing Zhang, Furu Wei, Sujian Li • 2026

Related benchmarks

| Task                               | Dataset  | Metric                         | Result | Rank |
|------------------------------------|----------|--------------------------------|--------|------|
| Mathematical Reasoning             | GSM8K    | Speedup (x)                    | 4.98   | 246  |
| Instruction Following              | Alpaca   | Speedup (x)                    | 4.53   | 111  |
| Question Answering                 | QA       | Speedup (x)                    | 3.66   | 47   |
| Speculative Decoding               | GSM8K    | Average Generation Length (τ)  | 5.45   | 31   |
| Multi-turn Conversation            | MT-Bench | Speedup (x)                    | 4.64   | 25   |
| Multi-turn Conversation Evaluation | MT-Bench | Speedup (x)                    | 3.82   | 25   |
| Speculative Decoding               | Alpaca   | Speedup (x)                    | 2.98   | 5    |
| Speculative Decoding               | MT-Bench | Speedup (x)                    | 2.88   | 3    |
| Speculative Decoding               | QA       | Speedup (x)                    | 2.23   | 3    |
