ByteFlow: Language Modeling through Adaptive Byte Compression without a Tokenizer

About

Modern language models still rely on fixed, pre-defined subword tokenizations. Once a tokenizer is trained, the LM can only operate at that fixed level of granularity, which often leads to brittle and counterintuitive behaviors even in otherwise strong reasoning models. We introduce ByteFlow Net, a new hierarchical architecture that removes tokenizers entirely and instead enables models to learn their own segmentation of raw byte streams into semantically meaningful units. ByteFlow Net performs compression-driven segmentation based on the coding rate of latent representations, yielding adaptive boundaries while preserving a static computation graph via Top-K selection. Unlike prior self-tokenizing methods that depend on brittle heuristics with human-designed inductive biases, ByteFlow Net adapts its internal representation granularity to the input itself. Experiments demonstrate that this compression-based chunking strategy yields substantial performance gains, with ByteFlow Net outperforming both BPE-based Transformers and previous byte-level architectures. These results suggest that end-to-end, tokenizer-free modeling is not only feasible but also more effective, opening a path toward more adaptive and information-grounded language models.
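The key mechanism the abstract describes is that chunk boundaries are chosen by a Top-K operation over a compression-style score, so the number of boundaries per sequence (and hence all tensor shapes) stays fixed while the boundary positions adapt to the input. The sketch below illustrates that idea only: the scoring function (a simple adjacent-latent difference norm standing in for the paper's coding-rate measure), the value of K, and all names are assumptions for illustration, not ByteFlow Net's actual implementation.

```python
# Minimal sketch of Top-K boundary selection over per-byte scores.
# All names (topk_chunk_boundaries, the score definition, k) are
# illustrative assumptions, not the paper's implementation.
import torch

def topk_chunk_boundaries(hidden: torch.Tensor, k: int) -> torch.Tensor:
    """Pick k chunk boundaries per sequence from per-byte latents.

    hidden: (batch, seq_len, dim) latent representations of raw bytes.
    Returns boundary indices of fixed shape (batch, k), so tensor
    shapes are the same for every input.
    """
    # Stand-in for a learned coding-rate score: the norm of the change
    # between adjacent latents (a large change suggests a unit boundary).
    delta = hidden[:, 1:] - hidden[:, :-1]       # (batch, seq_len - 1, dim)
    scores = delta.norm(dim=-1)                  # (batch, seq_len - 1)

    # Top-K keeps a fixed number of boundaries per sequence, unlike
    # thresholding, which would yield a data-dependent count.
    boundaries = scores.topk(k, dim=-1).indices  # (batch, k)
    return boundaries.sort(dim=-1).values        # sorted boundary positions

# Usage: 4 sequences of 128 bytes, 16-dim latents, 8 chunks each.
h = torch.randn(4, 128, 16)
print(topk_chunk_boundaries(h, k=8).shape)       # torch.Size([4, 8])
```

The design point this illustrates is why Top-K matters: thresholding the same scores would produce a variable number of boundaries and force dynamic shapes, whereas selecting a fixed K preserves a static computation graph while the boundary positions still adapt to the input.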

Chunyuan Deng, Sanket Lokegaonkar, Colin Lockard, Besnik Fetahu, Nasser Zalmout, Xian Li • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Commonsense Reasoning | HellaSwag | Accuracy | 55.42 | 1891 |
| Commonsense Reasoning | WinoGrande | Accuracy | 56.93 | 1085 |
| Question Answering | ARC-E | Accuracy | 75.87 | 416 |
| Question Answering | BoolQ | Accuracy | 76.48 | 317 |
| Question Answering | ARC-C | Accuracy | 40.36 | 87 |
| Physical Commonsense Reasoning | PIQA | Accuracy | 74.25 | 78 |
| Character-level Reasoning | CUTE | Accuracy (CUTE Overall) | 51.2 | 3 |
