Spelling Bee Embeddings for Language Modeling
About
We introduce a simple modification to the embedding layer: infusing token embeddings with information about their spelling. Models trained with these embeddings improve not only on spelling tasks, but also across standard benchmarks. Scaling studies on models from 40M to 800M parameters suggest the improvements are equivalent to needing about 8% less compute and data to reach the same test loss.
Markus N. Rabe, Judith Clymo, Zheren Dong • 2026
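The abstract does not specify how spelling information is combined with the token embedding. The sketch below shows one plausible instantiation, assuming a learned character embedding table that is mean-pooled over each token's spelling and added to the standard token embedding; the class name `SpellingAwareEmbedding`, the zero-padding convention, and the additive combination are illustrative assumptions, not the authors' exact method.

```python
import torch
import torch.nn as nn

class SpellingAwareEmbedding(nn.Module):
    """Token embedding augmented with a character-level spelling signal.

    Hypothetical sketch: the character ids spelling out each token are
    embedded, mean-pooled, and added to the ordinary token embedding.
    """

    def __init__(self, vocab_size: int, d_model: int,
                 num_chars: int = 256, max_token_len: int = 16):
        super().__init__()
        self.token_emb = nn.Embedding(vocab_size, d_model)
        self.char_emb = nn.Embedding(num_chars, d_model)
        self.max_token_len = max_token_len

    def forward(self, token_ids: torch.Tensor,
                token_char_ids: torch.Tensor) -> torch.Tensor:
        # token_ids:      (batch, seq)
        # token_char_ids: (batch, seq, max_token_len), the byte/char ids
        #                 spelling out each token, zero-padded on the right.
        tok = self.token_emb(token_ids)                   # (b, s, d)
        chars = self.char_emb(token_char_ids)             # (b, s, L, d)
        mask = (token_char_ids != 0).unsqueeze(-1).float()
        # Mean-pool character embeddings, ignoring padding positions.
        spelled = (chars * mask).sum(dim=2) / mask.sum(dim=2).clamp(min=1.0)
        return tok + spelled
```

An additive combination like this keeps the layer's output shape and interface identical to a plain `nn.Embedding`, so it can be dropped into an existing transformer without touching the rest of the architecture; whether the paper uses addition, concatenation, or another mixing scheme is not stated in this listing.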
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Code Generation | HumanEval | -- | 850 |
| Language Modeling | SlimPajama (test) | -- | 23 |
| General Language Evaluation | English lm-evaluation-harness | AGIEval Acc (Norm): 0.259 | 2 |
| Multilingual Language Evaluation | Non-English lm-evaluation-harness | C-Eval: 25.71 | 2 |
| Mathematical Reasoning | Math lm-evaluation-harness | GSM8k Accuracy: 1.21 | 2 |