
Tokenization is Sensitive to Language Variation

About

Variation in language is ubiquitous and often systematically linked to regional, social, and contextual factors. Tokenizers split texts into smaller units and might behave differently for less common linguistic forms. This might affect downstream LLM performance differently on two types of tasks: tasks where the model should be robust to language variation (e.g., for semantic tasks like NLI, labels do not depend on whether a text uses British or American spelling) and tasks where the model should be sensitive to language variation (e.g., for form-based tasks like authorship verification, labels depend on whether a text uses British or American spelling). We pre-train BERT base models with the popular Byte-Pair Encoding algorithm to investigate how key tokenization design choices impact the performance of downstream models: the corpus used to train the tokenizer, the pre-tokenizer, and the vocabulary size. We find that the best tokenizer varies across the two task types and that the pre-tokenizer has the biggest overall impact on performance. Further, we introduce a new approach to estimate tokenizer impact on downstream LLM performance, showing substantial improvement over metrics like Rényi efficiency. We encourage more work on language variation and its relation to tokenizers and thus LLM performance.
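To make the core idea concrete, here is a minimal, self-contained sketch of Byte-Pair Encoding (one of the design choices studied: the corpus used to train the tokenizer). The toy British/American corpora and the merge budget are illustrative assumptions, not the paper's actual setup; real BPE tokenizers add byte fallback, special tokens, and a pre-tokenization step.

```python
from collections import Counter

def merge_word(symbols, a, b):
    """Replace every adjacent (a, b) pair in a symbol list with the merged symbol a+b."""
    out, i = [], 0
    while i < len(symbols):
        if i + 1 < len(symbols) and symbols[i] == a and symbols[i + 1] == b:
            out.append(a + b)
            i += 2
        else:
            out.append(symbols[i])
            i += 1
    return out

def train_bpe(words, num_merges):
    """Learn BPE merges: repeatedly merge the most frequent adjacent symbol pair."""
    corpus = [list(w) for w in words]
    merges = []
    for _ in range(num_merges):
        pairs = Counter()
        for symbols in corpus:
            for pair in zip(symbols, symbols[1:]):
                pairs[pair] += 1
        if not pairs:
            break
        (a, b), _count = pairs.most_common(1)[0]
        merges.append((a, b))
        corpus = [merge_word(symbols, a, b) for symbols in corpus]
    return merges

def tokenize(word, merges):
    """Segment a word by applying the learned merges in order."""
    symbols = list(word)
    for a, b in merges:
        symbols = merge_word(symbols, a, b)
    return symbols

# Hypothetical toy corpora differing only in spelling variety.
british = ["colour", "flavour", "behaviour"] * 10
american = ["color", "flavor", "behavior"] * 10

merges_br = train_bpe(british, num_merges=8)
merges_us = train_bpe(american, num_merges=8)

# The same word is segmented differently depending on the training corpus:
print(tokenize("colour", merges_br))
print(tokenize("colour", merges_us))
```

With the British-trained merges, "colour" stays one token; with the American-trained merges it is split into several pieces, because the "-our" ending was never frequent enough to be merged. This is the kind of corpus-dependent behavior that can cut either way: it hurts tasks that should be robust to spelling variation but carries signal for tasks that should be sensitive to it.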

Anna Wegmann, Dong Nguyen, David Jurgens • 2025

Related benchmarks

Task                          | Dataset                               | Result                       | Rank
Pearson correlation analysis  | Tasks robust to language variation    | Pearson Correlation (r) 0.85 | 3
Pearson correlation analysis  | Tasks sensitive to language variation | Pearson Correlation (r) 0.84 | 3
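The Result column reports the Pearson correlation (r) between a tokenizer-level estimate and actual downstream performance. A minimal sketch of how such a correlation is computed, using made-up scores for five hypothetical tokenizers (the data is illustrative only, not the paper's):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient: covariance divided by the product of standard deviations."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical values: an intrinsic tokenizer metric vs. downstream task accuracy
# for five tokenizers. A high r means the metric predicts performance well.
intrinsic_estimate = [0.40, 0.55, 0.60, 0.72, 0.80]
downstream_accuracy = [0.61, 0.66, 0.70, 0.74, 0.79]

print(round(pearson_r(intrinsic_estimate, downstream_accuracy), 2))
```

An r near 1 indicates the estimate tracks downstream performance closely; this is the quantity the benchmark rows above compare across estimation methods.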

Other info

Code
