
Towards Fast Multilingual LLM Inference: Speculative Decoding and Specialized Drafters

About

Large language models (LLMs) have revolutionized natural language processing and broadened their applicability across diverse commercial applications. However, the deployment of these models is constrained by high inference time in multilingual settings. To mitigate this challenge, this paper explores a training recipe for an assistant model in speculative decoding, which drafts future tokens that are then verified by the target LLM. We show that language-specific draft models, optimized through a targeted pretrain-and-finetune strategy, substantially speed up inference compared to previous methods. We validate these models across various languages in terms of inference time, out-of-domain speedup, and GPT-4o evaluation.
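The draft-then-verify loop described above can be sketched as follows. This is a minimal greedy-decoding illustration with hypothetical toy models standing in for the drafter and the target LLM (the functions `draft_model` and `target_model` are assumptions for illustration, not the paper's actual models); real speculative decoding verifies all drafted positions in a single batched forward pass of the target model.

```python
# Minimal sketch of greedy speculative decoding with toy deterministic
# "models". These stand-ins are hypothetical: the real drafter and target
# are neural LMs, and verification is one batched target forward pass.

def draft_model(context):
    # Toy drafter: predicts last token + 1, but is deliberately wrong
    # (off by one) whenever the last token is a multiple of 4.
    last = context[-1]
    return last + 2 if last % 4 == 0 else last + 1

def target_model(context):
    # Toy target: always predicts last token + 1.
    return context[-1] + 1

def speculative_decode(prompt, num_tokens, gamma=4):
    """Generate num_tokens tokens. Each step, the drafter proposes gamma
    tokens cheaply; the target verifies them in one (simulated) pass and
    the longest agreeing prefix is accepted, plus one corrected token."""
    tokens = list(prompt)
    target_calls = 0
    while len(tokens) - len(prompt) < num_tokens:
        # 1) Drafter proposes gamma tokens autoregressively (cheap).
        draft, ctx = [], list(tokens)
        for _ in range(gamma):
            t = draft_model(ctx)
            draft.append(t)
            ctx.append(t)
        # 2) Target verifies all positions; counted as one expensive call.
        target_calls += 1
        ctx, accepted = list(tokens), []
        for t in draft:
            expected = target_model(ctx)
            if t != expected:
                accepted.append(expected)  # target's correction comes free
                break
            accepted.append(t)
            ctx.append(t)
        tokens.extend(accepted)
    return tokens[len(prompt):][:num_tokens], target_calls
```

With a well-aligned drafter, each expensive target call yields up to `gamma + 1` tokens instead of one, which is the source of the speedup ratios reported below; a language-specific drafter raises the acceptance rate for that language and hence the speedup.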

Euiin Yi, Taehyeon Kim, Hongseok Jeung, Du-Seong Chang, Se-Young Yun• 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Code Generation | MT-Bench (test) | Speedup Ratio | 2.437 | 26 |
| Machine Translation | WMT German-English 16 (test) | Speedup Ratio | 2.076 | 26 |
| Question Answering | Natural Questions (test) | Speedup Ratio | 1.96 | 26 |
| Summarization | CNN/Daily Mail (test) | Speedup Ratio | 2.133 | 26 |
| Mathematical Reasoning | GSM8K (test) | Relative Speedup | 2.454 | 17 |
| Machine Translation | JA-EN | Speedup Ratio | 1.757 | 8 |
| Machine Translation | RU-EN | Speedup Ratio | 1.817 | 8 |
| Machine Translation | DE-EN | Speedup Ratio | 2.36 | 8 |
| Machine Translation | FR-EN | Speedup Ratio | 2.135 | 8 |
| Machine Translation | ZH-EN | Speedup Ratio | 1.516 | 8 |
