MARS: Enabling Autoregressive Models Multi-Token Generation

About

Autoregressive (AR) language models generate text one token at a time, even when consecutive tokens are highly predictable from earlier context. We introduce MARS (Mask AutoRegreSsion), a lightweight fine-tuning method that teaches an instruction-tuned AR model to predict multiple tokens per forward pass. MARS requires no architectural modifications and no extra parameters, and it produces a single model that can still be called exactly like the original AR model with no performance degradation. Unlike speculative decoding, which maintains a separate draft model alongside the target, or multi-head approaches such as Medusa, which attach additional prediction heads, MARS requires only continued training on existing instruction data. When generating one token per forward pass, MARS matches or exceeds the AR baseline on six standard benchmarks. When allowed to accept multiple tokens per step, it maintains baseline-level accuracy while achieving 1.5-1.7x higher throughput. We further develop a block-level KV caching strategy for batch inference, achieving up to 1.71x wall-clock speedup over AR decoding with KV cache on Qwen2.5-7B. Finally, MARS supports real-time speed adjustment via confidence thresholding: under high request load, the serving system can increase throughput on the fly, without swapping models or restarting, providing a practical latency-quality knob for deployment.
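
The confidence-thresholding knob is easiest to see in code. Below is a minimal sketch of a MARS-style decode step, assuming a hypothetical model interface that returns logits for the next `block_size` positions in a single forward pass; the function name, the `tau` parameter, and the greedy acceptance rule shown here are illustrative assumptions, not the paper's exact implementation.

```python
import torch

def mars_decode_step(model, input_ids, tau=0.9, block_size=4):
    # Hypothetical interface: model(input_ids) returns logits of shape
    # [1, seq_len, vocab] whose last `block_size` positions are the
    # model's predictions for the next `block_size` tokens.
    with torch.no_grad():
        logits = model(input_ids)
        block_logits = logits[0, -block_size:]   # [block_size, vocab]
        probs = torch.softmax(block_logits, dim=-1)
        conf, tokens = probs.max(dim=-1)         # per-slot confidence + argmax

    # Always accept the first token (this alone reproduces plain AR
    # decoding), then keep accepting while confidence stays above `tau`.
    accepted = [tokens[0].item()]
    for c, t in zip(conf[1:].tolist(), tokens[1:].tolist()):
        if c < tau:                              # stop at first uncertain slot
            break
        accepted.append(t)
    return accepted
```

Lowering `tau` accepts more tokens per forward pass (higher throughput, more risk of drift from the AR distribution), while raising it toward 1 degenerates to one token per step, i.e. standard AR decoding. Because `tau` is just a runtime argument, a serving system could adjust it per request under load, which is the latency-quality knob the abstract describes.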

Ziqi Jin, Lei Wang, Ziwei Luo, Aixin Sun • 2026

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Instruction Following | IF-Eval (0-shot) | Score 68.2 | 55 |
| Reasoning | BBH (3-shot) | Score 54.6 | 49 |
| Code Generation | HumanEval (0-shot) | Score 81.7 | 14 |
| General Knowledge | MMLU-Pro (0-shot) | Score 44.4 | 9 |
| Graduate-level Q&A | GPQA (0-shot) | Accuracy 26.6 | 9 |

Other info

GitHub
