
GRIFFIN: Effective Token Alignment for Faster Speculative Decoding

About

Speculative decoding accelerates inference in large language models (LLMs) by generating multiple draft tokens simultaneously. However, existing methods often suffer from token misalignment between the training and decoding phases, which limits their performance. To address this, we propose GRIFFIN, a novel framework that incorporates a token-alignable training strategy and a token-alignable draft model to mitigate misalignment. The training strategy employs a loss masking mechanism to exclude highly misaligned tokens during training, preventing them from negatively impacting the draft model's optimization. The token-alignable draft model introduces input tokens to correct inconsistencies in generated features. Experiments on LLaMA, Vicuna, Qwen and Mixtral models demonstrate that GRIFFIN achieves an average acceptance-length improvement of over 8% and a speedup ratio exceeding 7%, outperforming current state-of-the-art speculative decoding methods. Our code and GRIFFIN's draft models are released publicly at https://github.com/hsj576/GRIFFIN.
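The loss-masking idea from the abstract can be illustrated with a minimal sketch: per-token draft losses are computed as usual, but tokens judged highly misaligned are excluded from the training objective so they do not distort the draft model's optimization. The thresholding rule and the function name below are illustrative assumptions for exposition, not GRIFFIN's actual criterion.

```python
def masked_draft_loss(per_token_losses, misalignment_threshold):
    """Average per-token draft losses, masking out highly misaligned tokens.

    Hypothetical criterion: a token counts as "highly misaligned" when its
    loss exceeds `misalignment_threshold` (the paper's actual rule may differ).
    """
    kept = [loss for loss in per_token_losses if loss <= misalignment_threshold]
    if not kept:
        # Every token was masked: the batch contributes no gradient signal.
        return 0.0
    return sum(kept) / len(kept)

# Token 3 is badly misaligned; masking it keeps the average loss from
# being dominated by a single pathological position.
losses = [0.4, 0.6, 5.2, 0.5]
print(masked_draft_loss(losses, misalignment_threshold=2.0))
```

In a real training loop this masking would be applied to the per-position cross-entropy tensor before reduction, so gradients simply never flow from the excluded positions.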

Shijing Hu, Jingyang Li, Xingyu Xie, Zhihui Lu, Kim-Chuan Toh, Pan Zhou• 2025

Related benchmarks

Task                       Dataset     Result                                Rank
Mathematical Reasoning     GSM8K       Tau (τ): 5.58                         97
Code Generation            HumanEval   Success Rate (SR): 3.73               43
Multi-turn Conversation    MT-Bench    SR: 2.95                              43
Speculative Decoding       GSM8K       Average Generation Length (τ): 5.47   31
Speculative Decoding       Alpaca      Speedup: 2.98                         5
Speculative Decoding       MT-Bench    Speedup: 2.81                         3
Speculative Decoding       QA          Speedup: 2.2                          3
