
Online Speculative Decoding

About

Speculative decoding is a pivotal technique to accelerate the inference of large language models (LLMs) by employing a smaller draft model to predict the target model's outputs. However, its efficacy can be limited due to the low predictive accuracy of the draft model, particularly when faced with diverse text inputs and a significant capability gap between the draft and target models. We introduce online speculative decoding to address this challenge. The main idea is to continuously update the (multiple) draft model(s) on observed user query data. Adapting to query distribution mitigates the shifts between the training distribution of the draft model and the query distribution, enabling the draft model to more accurately predict the target model's outputs. We develop a prototype of online speculative decoding based on knowledge distillation and evaluate it using both synthetic and real query data. The results show a substantial increase in the token acceptance rate by 0.1 to 0.65, bringing 1.42x to 2.17x latency reduction. Our code is available at https://github.com/LiuXiaoxuanPKU/OSD.
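The token acceptance rate mentioned above comes from the standard speculative-decoding acceptance rule: the draft model proposes a run of tokens, and the target model accepts each one with probability min(1, p(token)/q(token)), where p and q are the target and draft probabilities. The sketch below illustrates that rule on toy per-position distributions; it is a minimal illustration under our own naming (`speculative_accept` is a hypothetical helper), not the paper's implementation from the linked repository.

```python
import random

def speculative_accept(draft_probs, target_probs, proposed, rng=random.random):
    """Accept or reject a run of draft-proposed tokens.

    draft_probs, target_probs: per-position dicts mapping token -> probability
    (q from the draft model, p from the target model).
    proposed: tokens sampled from the draft model, one per position.
    Returns the accepted prefix; the first rejection ends the run.
    """
    accepted = []
    for q, p, tok in zip(draft_probs, target_probs, proposed):
        # Standard acceptance test: keep tok with probability min(1, p/q).
        if rng() < min(1.0, p.get(tok, 0.0) / q[tok]):
            accepted.append(tok)
        else:
            break  # rejected: target resamples from here in the full algorithm
    return accepted
```

A draft model fine-tuned online toward the observed query distribution pushes p(token)/q(token) toward 1 on real queries, which is exactly what raises the acceptance rate and lengthens the accepted runs.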

Xiaoxuan Liu, Lanxiang Hu, Peter Bailis, Alvin Cheung, Zhijie Deng, Ion Stoica, Hao Zhang• 2023

Related benchmarks

Task                   | Dataset        | Result (Average Length) | Rank
Code Search            | Code-Search    | 1.34                    | 22
Text-to-SQL            | Spider         | 1.36                    | 22
Mathematical Reasoning | GSM8K          | 1.52                    | 22
Instruction Following  | Alpaca Finance | 1.31                    | 22
