
Distilling Token-Trained Models into Byte-Level Models

About

Byte Language Models (BLMs) have emerged as a promising direction for scaling language models beyond tokenization. However, existing BLMs typically require training from scratch on trillions of bytes, making them prohibitively expensive. In this paper, we propose an efficient distillation recipe that converts existing token-trained LLMs into BLMs while retaining comparable capabilities. Our recipe follows a two-stage curriculum: (1) Progressive Knowledge Distillation, which aligns byte-level representations with the embeddings of the token-trained teacher model; and (2) Byte-Level Supervised Fine-Tuning, which enables end-to-end generation entirely in the byte space. We validate our approach across multiple model families, including Llama, Qwen, and OLMo, and demonstrate that the distilled BLMs retain most of the teacher models' performance using only approximately 125B bytes.
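The core of Stage 1 can be illustrated with a small sketch. This is an illustrative assumption, not the paper's actual implementation: byte-level hidden states are mean-pooled over each token's byte span and regressed onto the token-trained teacher's embeddings with a simple MSE objective (the function names, shapes, and pooling choice here are all hypothetical).

```python
import numpy as np

def pool_bytes_to_tokens(byte_states, token_byte_spans):
    """Mean-pool byte representations over each token's byte span.

    byte_states: (num_bytes, hidden_dim) array of byte-level states.
    token_byte_spans: list of (start, end) byte offsets, one per token.
    Returns a (num_tokens, hidden_dim) array.
    """
    return np.stack([byte_states[s:e].mean(axis=0) for s, e in token_byte_spans])

def alignment_loss(byte_states, teacher_embeds, token_byte_spans):
    """MSE between pooled byte representations and teacher token embeddings."""
    pooled = pool_bytes_to_tokens(byte_states, token_byte_spans)
    return float(((pooled - teacher_embeds) ** 2).mean())

# Toy example: 6 bytes forming 2 tokens (byte spans [0,3) and [3,6)), hidden dim 4.
rng = np.random.default_rng(0)
byte_states = rng.normal(size=(6, 4))
spans = [(0, 3), (3, 6)]
teacher = pool_bytes_to_tokens(byte_states, spans)  # perfectly aligned case
print(alignment_loss(byte_states, teacher, spans))  # → 0.0
```

Stage 2 (Byte-Level Supervised Fine-Tuning) then trains the model with a standard next-byte prediction loss, so generation no longer depends on the teacher's tokenizer at all.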

Zishuo Bao, Jiaqi Leng, Junxiong Wang, Bowen Peng, Yucheng Lu • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Commonsense Reasoning | HellaSwag | Accuracy | 65.4 | 1460 |
| Multi-task Language Understanding | MMLU | Accuracy | 37.6 | 842 |
| Commonsense Reasoning | WinoGrande | Accuracy | 63.9 | 776 |
| Physical Commonsense Reasoning | PIQA | Accuracy | 75.7 | 329 |
| Question Answering | ARC | Accuracy | 60 | 154 |
| Commonsense Reasoning | SocialIQA | Accuracy | 53.8 | 97 |
| Zero-shot Language Understanding | Evaluation Suite Zero-shot (LMB, HellA, PIQA, ARC-E, ARC-C, WINO, Open, MMLU) | ARC-E Accuracy | 77.1 | 25 |
| Question Answering | CSQA | Accuracy | 65.8 | 7 |
