SpaceByte: Towards Deleting Tokenization from Large Language Modeling

About

Tokenization is widely used in large language models because it significantly improves performance. However, tokenization imposes several disadvantages, such as performance biases, increased adversarial vulnerability, decreased character-level modeling performance, and increased modeling complexity. To address these disadvantages without sacrificing performance, we propose SpaceByte, a novel byte-level decoder architecture that closes the performance gap between byte-level and subword autoregressive language modeling. SpaceByte consists of a byte-level Transformer model, but with additional, larger Transformer blocks inserted in the middle of the layers. We find that performance is significantly improved by applying these larger blocks only after certain bytes, such as space characters, which typically denote word boundaries. Our experiments show that, for a fixed training and inference compute budget, SpaceByte outperforms other byte-level architectures and roughly matches the performance of tokenized Transformer architectures.

Kevin Slagle • 2024
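
The architecture described in the abstract lends itself to a short illustration. Below is a minimal PyTorch sketch of the idea, not the author's implementation: the module names, all dimensions, and the `boundary_mask` helper are assumptions, the boundary rule is simplified to "a byte that follows a space or newline", and causal masking is omitted for brevity.

```python
# Sketch only (not the paper's code): cheap byte-level Transformer blocks
# run at every position, while larger "global" blocks run only at
# word-boundary bytes. Dimensions and the boundary rule are illustrative.

import torch
import torch.nn as nn


def boundary_mask(byte_ids: torch.Tensor) -> torch.Tensor:
    """True where a word (roughly) begins: position 0, and any byte
    that immediately follows a space or newline (an assumed rule)."""
    spacelike = (byte_ids == ord(" ")) | (byte_ids == ord("\n"))
    starts = torch.zeros_like(spacelike)
    starts[:, 0] = True
    starts[:, 1:] = spacelike[:, :-1]
    return starts


class SpaceByteSketch(nn.Module):
    def __init__(self, d_local: int = 128, d_global: int = 512, n_heads: int = 4):
        super().__init__()
        self.embed = nn.Embedding(256, d_local)  # raw bytes, no tokenizer
        self.local_in = nn.TransformerEncoderLayer(d_local, n_heads, batch_first=True)
        self.up = nn.Linear(d_local, d_global)
        # The larger block only ever sees boundary positions.
        self.global_block = nn.TransformerEncoderLayer(d_global, n_heads, batch_first=True)
        self.down = nn.Linear(d_global, d_local)
        self.local_out = nn.TransformerEncoderLayer(d_local, n_heads, batch_first=True)
        self.head = nn.Linear(d_local, 256)  # next-byte logits

    def forward(self, byte_ids: torch.Tensor) -> torch.Tensor:
        x = self.local_in(self.embed(byte_ids))
        mask = boundary_mask(byte_ids)
        # Gather boundary positions, run the large block there, and add the
        # result back. One sequence at a time keeps the sketch simple; a
        # real implementation would batch the variable-length boundary runs.
        rows = []
        for b in range(x.size(0)):
            idx = mask[b].nonzero(as_tuple=True)[0]
            g = self.global_block(self.up(x[b, idx]).unsqueeze(0)).squeeze(0)
            update = torch.zeros_like(x[b]).index_copy(0, idx, self.down(g))
            rows.append(x[b] + update)
        x = self.local_out(torch.stack(rows))
        return self.head(x)
```

Usage, assuming the sketch above:

```python
model = SpaceByteSketch()
ids = torch.tensor([list(b"byte level models need no tokenizer")])
logits = model(ids)  # shape (batch, sequence_length, 256): next-byte logits
```

Because roughly one byte per word is a boundary, the larger blocks see far fewer positions than the byte-level blocks, which is how this design can keep its compute budget comparable to a tokenized Transformer.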

Related benchmarks

Task                            Dataset          Metric          Result   Rank
Commonsense Reasoning           HellaSwag        Accuracy        48.76    1891
Commonsense Reasoning           WinoGrande       Accuracy        53.15    1085
Question Answering              ARC-E            Accuracy        71.12    416
Question Answering              BoolQ            Accuracy        72.04    317
Language Modeling               PG-19 (test)     --              --       110
Question Answering              ARC-C            Accuracy        0.3605   87
Physical Commonsense Reasoning  PIQA             Accuracy        69.18    78
Language Modeling               STORIES (test)   Bits Per Byte   0.833    6

Other info

Code
