
Multi-Token Prediction via Self-Distillation

About

Existing techniques for accelerating language model inference, such as speculative decoding, require training auxiliary speculator models and building and deploying complex inference pipelines. We consider a new approach for converting a pretrained autoregressive language model from a slow single-token predictor into a fast standalone multi-token predictor using a simple online distillation objective. The final model retains the exact same implementation as the pretrained initial checkpoint and is deployable without the addition of any auxiliary verifier or other specialized inference code. On GSM8K, our method produces models that decode more than $3\times$ faster on average with a $<5\%$ drop in accuracy relative to single-token decoding performance.
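The abstract does not spell out the exact objective, but online self-distillation losses of this kind typically minimize cross-entropy between the frozen single-token teacher's next-token distributions (produced by sequential decoding) and the student's multi-token predictions (produced in one parallel forward pass). A minimal NumPy sketch of such a loss, where the function name, shapes, and the toy check are our assumptions, not the paper's implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    z = x - x.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def self_distill_loss(student_logits, teacher_logits):
    """Cross-entropy between the frozen teacher's soft targets and the
    student's parallel multi-token predictions (a generic sketch).

    student_logits: (k, vocab) -- k future tokens predicted in ONE student pass
    teacher_logits: (k, vocab) -- collected from k sequential teacher steps
    """
    p_teacher = softmax(teacher_logits)               # soft targets (no gradient)
    log_p_student = np.log(softmax(student_logits) + 1e-12)
    return -(p_teacher * log_p_student).sum(axis=-1).mean()

# Toy sanity check: a student that matches the teacher exactly achieves the
# minimum possible loss (the teacher's entropy); a random student scores worse.
rng = np.random.default_rng(0)
teacher = rng.normal(size=(4, 16))
loss_matched = self_distill_loss(teacher, teacher)
loss_random = self_distill_loss(rng.normal(size=(4, 16)), teacher)
```

Because the teacher is just a frozen copy of the student's own initial checkpoint, no auxiliary speculator model is ever trained or deployed, consistent with the "standalone" claim above.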

John Kirchenbauer, Abhimanyu Hans, Brian Bartoldson, Micah Goldblum, Ashwinee Panda, Tom Goldstein • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Instruction Following | IFEval | Accuracy (0-100) | 64.7 | 292 |
| Mathematical Reasoning | GSM8K (test) | Accuracy | 89.1 | 79 |
| Open-ended Generation | CNN/DailyMail | ROUGE-L | 24.3 | 40 |
| Instruction Following | BBH | Accuracy | 57.3 | 40 |
| Mathematical Reasoning | AIME25 | Accuracy (%) | 46.7 | 7 |
| STEM Question Answering | GPQA Main | Accuracy | 0.181 | 5 |
