From Bytes to Ideas: Language Modeling with Autoregressive U-Nets
About
Tokenization imposes a fixed granularity on the input text, freezing how a language model operates on data and how far into the future it predicts. Byte Pair Encoding (BPE) and similar schemes split text once, build a static vocabulary, and leave the model stuck with that choice. We relax this rigidity by introducing an autoregressive U-Net that learns to embed its own tokens as it trains. The network reads raw bytes, pools them into words, then pairs of words, then groups of up to 4 words, giving it a multi-scale view of the sequence. At deeper stages, the model must predict further into the future -- anticipating the next few words rather than the next byte -- so deeper stages focus on broader semantic patterns while earlier stages handle fine details. With pretraining compute carefully tuned and controlled, shallow hierarchies tie strong BPE baselines, and deeper hierarchies show a promising trend. Because tokenization now lives inside the model, the same system can handle character-level tasks and carry knowledge across low-resource languages.
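The sketch below is a minimal, illustrative take on the multi-scale pooling idea described above: byte embeddings are pooled into word vectors, which are then pooled into pairs and groups of up to 4 words. It is not the authors' implementation; the whitespace word splitting, mean pooling, and the helper names (`byte_to_word_vectors`, `pool_stage`) are simplifying assumptions, whereas the paper learns these embeddings inside the U-Net.

```python
# Minimal sketch (assumptions noted above) of the byte -> word -> word-group hierarchy.
import torch

def byte_to_word_vectors(byte_emb: torch.Tensor, text: bytes) -> torch.Tensor:
    """Mean-pool byte embeddings into one vector per whitespace-delimited word."""
    word_ids, wid = [], 0
    for b in text:
        word_ids.append(wid)
        if b == ord(" "):       # a space closes the current word
            wid += 1
    word_ids = torch.tensor(word_ids)
    n_words = int(word_ids.max()) + 1
    out = torch.zeros(n_words, byte_emb.size(-1))
    out.index_add_(0, word_ids, byte_emb)
    counts = torch.bincount(word_ids, minlength=n_words).clamp(min=1)
    return out / counts.unsqueeze(-1)

def pool_stage(x: torch.Tensor, group: int) -> torch.Tensor:
    """Mean-pool consecutive groups of `group` vectors (padding the tail)."""
    pad = (-x.size(0)) % group
    if pad:
        x = torch.cat([x, x.new_zeros(pad, x.size(-1))])
    return x.view(-1, group, x.size(-1)).mean(dim=1)

text = b"bytes become words then pairs of words"
byte_emb = torch.randn(len(text), 64)         # stand-in for learned byte embeddings
words = byte_to_word_vectors(byte_emb, text)  # stage 1: one vector per word
pairs = pool_stage(words, 2)                  # stage 2: pairs of words
quads = pool_stage(words, 4)                  # stage 3: groups of up to 4 words
print(words.shape, pairs.shape, quads.shape)
```

Each successive stage operates on a shorter, coarser sequence, which is what lets deeper stages predict several words ahead while the byte-level stage handles fine detail.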
Related benchmarks
| Task | Dataset | Metric | Result (%) | Rank |
|---|---|---|---|---|
| Commonsense Reasoning | HellaSwag | Accuracy | 50.34 | 1891 |
| Commonsense Reasoning | WinoGrande | Accuracy | 54.12 | 1085 |
| Question Answering | ARC-E | Accuracy | 72.91 | 416 |
| Question Answering | BoolQ | Accuracy | 73.85 | 317 |
| Question Answering | ARC-C | Accuracy | 37.43 | 87 |
| Physical Commonsense Reasoning | PIQA | Accuracy | 74.87 | 78 |