
When Perplexity Lies: Generation-Focused Distillation of Hybrid Sequence Models

About

Converting a pretrained Transformer into a more efficient hybrid model through distillation offers a promising approach to reducing inference costs. However, achieving high-quality generation in distilled models requires careful joint design of both the student architecture and the distillation process. Many prior distillation works evaluate downstream multiple-choice benchmarks by ranking candidate answers with log-likelihood rather than requiring autoregressive generation, which can obscure important differences in model quality. For example, we show that a 7B-parameter distilled model that matches its teacher to within 0.2 percentage points under log-likelihood scoring falls behind by 20.8 percentage points when it must generate answers autoregressively. We propose a Hybrid Kimi Delta Attention (Hybrid-KDA) architecture paired with GenDistill, a multi-stage distillation pipeline, and use generation-based evaluation throughout to guide design decisions. Applying this approach to Qwen3-0.6B, we systematically ablate six design axes: training objective, loss masking, training duration, dataset selection, parameter freezing, and architecture choice. We find that log-likelihood-based evaluation consistently underestimates the gap between teacher and student, and can in some cases reverse the ranking of design choices, so conclusions drawn from perplexity-only evaluation may be misleading. Among the factors we study, dataset selection, completion-only masking, and freezing attention layers during post-training have the largest impact on generation quality. Our best Hybrid-KDA model retains 86–90% of teacher accuracy on knowledge benchmarks while reducing KV-cache memory by up to 75% and improving time-to-first-token by 2–4× at 128K-token contexts.
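The gap the abstract highlights comes from two different evaluation protocols: log-likelihood ranking scores pre-written candidate answers, while generation-based evaluation requires the model to produce the answer token by token. A minimal sketch of why the two can disagree, using a toy scoring function of our own (not the paper's evaluation harness or model), is:

```python
# Toy contrast between log-likelihood ranking and generation-based
# evaluation. `toy_logprob` stands in for a language model's summed
# token log-probabilities; it is purely illustrative.

def toy_logprob(prompt: str, continuation: str) -> float:
    """Pretend log P(continuation | prompt): shorter strings score
    higher here, mimicking the length bias that log-likelihood
    ranking of candidates can introduce."""
    return -0.5 * len(continuation)

def rank_by_loglikelihood(prompt: str, candidates: list[str]) -> str:
    """Multiple-choice protocol: the model never generates; it only
    assigns a score to each pre-written candidate answer."""
    return max(candidates, key=lambda c: toy_logprob(prompt, c))

def generate_answer(prompt: str, vocab: list[str], max_tokens: int = 3) -> str:
    """Generation protocol: the model must emit the answer itself,
    here via greedy decoding over a tiny vocabulary."""
    out = ""
    for _ in range(max_tokens):
        out += max(vocab, key=lambda t: toy_logprob(prompt + out, t))
    return out

prompt = "Q: What is the capital of France? A:"
print(rank_by_loglikelihood(prompt, [" Paris", " Lyon"]))  # picks " Lyon" (shorter)
print(generate_answer(prompt, [" Paris", " Lyon", "."]))
```

Under the first protocol the model only needs to score two fixed strings; under the second it must construct a well-formed answer on its own, which is why the two can rank the same model very differently.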

Juan Gabriel Kostelec, Xiang Wang, Axel Laborieux, Christos Sourmpis, Qinghai Guo • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Instruction Following | IFEval | IFEval Accuracy | 46.7 | 625 |
| Commonsense Reasoning | WinoGrande | Accuracy | 50.7 | 189 |
| Reasoning | ARC Easy | -- | -- | 187 |
| Long-context Language Understanding | LongBench (test) | Average Score | 14.5 | 147 |
| Reasoning | ARC Challenge | Accuracy | 48.4 | 93 |
| Code Reasoning | HumanEval | HumanEval Score | 19.9 | 40 |
| Reasoning | Big-Bench Hard (BBH) | Accuracy | 18.9 | 33 |
| Knowledge | CMMLU | Knowledge Score | 38.7 | 16 |
| Reasoning | GSM8K | Accuracy (GSM8K) | 34.5 | 14 |
| Commonsense Reasoning | HellaSwag | Accuracy | 30 | 9 |

Showing 10 of 13 rows.
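The abstract's claim of "up to 75% KV-cache reduction" is consistent with a hybrid that keeps full softmax attention in only one of every four layers, with the rest replaced by a constant-memory delta-rule mechanism. A back-of-envelope check, with an illustrative model configuration of our own choosing (the paper does not state these exact numbers):

```python
# Sanity-check the KV-cache reduction: standard attention caches a K and
# a V vector per layer, per KV head, per token. If a hybrid retains full
# attention in only 7 of 28 layers (an assumed ratio), the cache shrinks
# proportionally, independent of sequence length.

def kv_cache_bytes(n_layers: int, n_kv_heads: int, head_dim: int,
                   seq_len: int, bytes_per_elem: int = 2) -> int:
    """KV-cache size in bytes: factor of 2 for K and V, fp16 elements."""
    return 2 * n_layers * n_kv_heads * head_dim * seq_len * bytes_per_elem

full = kv_cache_bytes(n_layers=28, n_kv_heads=8, head_dim=128, seq_len=131072)
hybrid = kv_cache_bytes(n_layers=7, n_kv_heads=8, head_dim=128, seq_len=131072)
print(f"reduction: {1 - hybrid / full:.0%}")  # 7/28 layers kept -> 75% smaller
```

The recurrent layers do add a fixed-size state per layer, but unlike the KV cache that state does not grow with context length, which is also what drives the time-to-first-token gains at 128K tokens.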
