
CORAL: Learning Consistent Representations across Multi-step Training with Lighter Speculative Drafter

About

Speculative decoding is a powerful technique that accelerates Large Language Model (LLM) inference by leveraging a lightweight speculative draft model. However, existing designs suffer performance degradation due to misalignment between training and inference. Recent methods have tried to solve this issue by adopting a multi-step training strategy, but the complex inputs of different training steps make it harder for the draft model to converge. To address this, we propose CORAL, a novel framework that improves both accuracy and efficiency in speculative drafting. CORAL introduces Cross-Step Representation Alignment, a method that enhances consistency across multiple training steps, significantly improving speculative drafting performance. Additionally, we identify the LM head as a major bottleneck in the inference speed of the draft model. We introduce a weight-grouping mechanism that selectively activates a subset of LM head parameters during inference, substantially reducing the latency of the draft model. We evaluate CORAL on three LLM families and three benchmark datasets, achieving speedup ratios of 2.50x-4.07x and outperforming state-of-the-art methods such as EAGLE-2 and HASS. Our results demonstrate that CORAL effectively mitigates training-inference misalignment and delivers significant speedup for modern LLMs with large vocabularies.
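The abstract names Cross-Step Representation Alignment without giving implementation details. Below is a minimal sketch of what such an alignment term could look like, under the assumption that it is an auxiliary loss pulling each later training step's hidden states toward the first step's representation; the function and variable names are hypothetical, not the paper's API:

    import torch.nn.functional as F

    def cross_step_alignment_loss(step_hidden_states):
        """Hypothetical auxiliary loss: keep the draft model's hidden
        representations consistent across multi-step training steps.

        step_hidden_states: list of tensors, each (batch, seq_len, dim),
        one per speculative training step.
        """
        # Treat step 0 as the reference and stop gradients through it.
        anchor = step_hidden_states[0].detach()
        loss = 0.0
        for h in step_hidden_states[1:]:
            # Smooth L1 keeps the alignment robust to outlier positions.
            loss = loss + F.smooth_l1_loss(h, anchor)
        return loss / max(len(step_hidden_states) - 1, 1)

In a setup like this, the alignment term would be added to the usual draft-model training loss with a weighting coefficient; the exact formulation CORAL uses is not specified in this summary.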
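The weight-grouping mechanism is likewise described only at a high level: a subset of LM head parameters is activated during inference. A minimal sketch of that idea, assuming the draft model computes logits over a pre-selected group of LM head rows instead of the full vocabulary (all names here are illustrative):

    import torch

    def grouped_lm_head_logits(hidden, lm_head_weight, active_rows):
        """Sketch of a weight-grouped LM head (hypothetical names).

        hidden:          (batch, dim) last hidden state of the draft model
        lm_head_weight:  (V, dim) full LM head weight matrix
        active_rows:     (k,) long tensor of vocabulary ids in the group
        """
        # Project onto only k of the V vocabulary rows, cutting the
        # matmul cost roughly by k / V.
        sub_weight = lm_head_weight.index_select(0, active_rows)  # (k, dim)
        sub_logits = hidden @ sub_weight.T                        # (batch, k)
        # Scatter back to full-vocabulary logits so downstream sampling
        # code is unchanged; inactive tokens get -inf.
        logits = hidden.new_full(
            (hidden.size(0), lm_head_weight.size(0)), float("-inf"))
        logits[:, active_rows] = sub_logits
        return logits

Because the draft model's guesses are verified by the target model anyway, restricting the draft LM head this way can trade a small amount of acceptance rate for a large latency reduction, which is consistent with the abstract's claim that the LM head dominates draft-model latency for large vocabularies.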

Yepeng Weng, Dianwen Mei, Huishi Qiu, Xujie Chen, Li Liu, Jiang Tian, Zhongchao Shi • 2025

Related benchmarks

Task                     Dataset         Metric          Result   Rank
Mathematical Reasoning   GSM8K           Tau ($\tau$)    5.39     54
Multi-turn Dialogue      MT-Bench        Kendall's Tau   5.25     54
Instruction Following    Alpaca (test)   Kendall's Tau   4.96     11
Summarization            CNN-DM (test)   Tau ($\tau$)    4.54     11
