
Spelling Bee Embeddings for Language Modeling

About

We introduce a simple modification to the embedding layer. The key change is to infuse token embeddings with information about their spelling. Models trained with these embeddings improve not only on spelling, but also across standard benchmarks. We conduct scaling studies for models with 40M to 800M parameters, which suggest that the improvements are equivalent to needing about 8% less compute and data to achieve the same test loss.
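The paper's exact construction is not reproduced here, but the idea of infusing token embeddings with spelling information can be sketched in a few lines. The vocabulary, dimensions, and additive mixing scheme below are illustrative assumptions, not the authors' actual method:

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 16

# Ordinary learned token embeddings (randomly initialized for this sketch).
vocab = ["cat", "cats", "dog"]
token_emb = {tok: rng.standard_normal(DIM) for tok in vocab}

# A small character embedding table shared across all tokens.
char_emb = {c: rng.standard_normal(DIM) for c in "abcdefghijklmnopqrstuvwxyz"}

def spelling_embedding(token: str) -> np.ndarray:
    """Mean of the token's character embeddings."""
    return np.mean([char_emb[c] for c in token], axis=0)

def embed(token: str, alpha: float = 0.5) -> np.ndarray:
    """Token embedding mixed with a spelling-derived component
    (hypothetical mixing weight alpha)."""
    return token_emb[token] + alpha * spelling_embedding(token)

# Tokens that share characters ("cat"/"cats") receive correlated
# spelling components, so their spelling is visible to the model.
v_cat, v_cats = embed("cat"), embed("cats")
```

Under this sketch, the spelling component is deterministic given a token's characters, so related surface forms get systematically related embeddings without any change to the rest of the architecture.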

Markus N. Rabe, Judith Clymo, Zheren Dong • 2026

Related benchmarks

| Task                             | Dataset                                           | Result | Rank |
|----------------------------------|---------------------------------------------------|--------|------|
| Code Generation                  | HumanEval                                         | --     | 850  |
| Language Modeling                | SlimPajama (test)                                 | --     | 23   |
| General Language Evaluation      | English lm-evaluation-harness, AGIEval Acc (Norm) | 0.259  | 2    |
| Multilingual Language Evaluation | Non-English lm-evaluation-harness, C-Eval         | 25.71  | 2    |
| Mathematical Reasoning           | Math lm-evaluation-harness, GSM8k Accuracy        | 1.21   | 2    |
