
Teaching Old Tokenizers New Words: Efficient Tokenizer Adaptation for Pre-trained Models

About

Tokenizer adaptation plays an important role in transferring pre-trained language models to new domains or languages. In this work, we address two complementary aspects of this process: vocabulary extension and pruning. The common approach to extension trains a new tokenizer on domain-specific text and appends the tokens that do not overlap with the existing vocabulary, which often results in many tokens that are unreachable or never used. We propose continued BPE training, which adapts a pre-trained tokenizer by continuing the BPE merge learning process on new data. Experiments across multiple languages and model families show that this approach improves tokenization efficiency and leads to better utilization of added vocabulary. We also introduce leaf-based vocabulary pruning, which removes redundant tokens while preserving model quality. Together, these methods provide practical tools for controlled vocabulary modification, which we release as an open-source package.
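To make the idea of continued BPE training concrete, here is a minimal toy sketch (not the authors' released implementation; all function names are hypothetical): the pre-trained merge table is replayed on the new-domain corpus first, and BPE merge learning then simply continues from that state, so every new merge builds on tokens the existing tokenizer can already produce.

```python
from collections import Counter

def get_pair_counts(words):
    """Count adjacent symbol pairs, weighted by word frequency."""
    pairs = Counter()
    for symbols, freq in words:
        for a, b in zip(symbols, symbols[1:]):
            pairs[(a, b)] += freq
    return pairs

def apply_merge(words, pair):
    """Replace every occurrence of `pair` with its concatenation."""
    merged = []
    for symbols, freq in words:
        out, i = [], 0
        while i < len(symbols):
            if i + 1 < len(symbols) and (symbols[i], symbols[i + 1]) == pair:
                out.append(symbols[i] + symbols[i + 1])
                i += 2
            else:
                out.append(symbols[i])
                i += 1
        merged.append((out, freq))
    return merged

def continue_bpe(corpus, old_merges, num_new_merges):
    """Replay the pre-trained merges, then keep learning merges on `corpus`."""
    words = [(list(w), f) for w, f in Counter(corpus.split()).items()]
    for pair in old_merges:              # replay the existing merge table
        words = apply_merge(words, pair)
    new_merges = []
    for _ in range(num_new_merges):      # continue BPE training on new data
        pairs = get_pair_counts(words)
        if not pairs:
            break
        best = max(pairs, key=pairs.get)
        words = apply_merge(words, best)
        new_merges.append(best)
    return old_merges + new_merges
```

Because new merges are learned on top of the replayed table, every added token is reachable by construction, which is exactly the property the naive append-new-tokens approach lacks.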

Taido Purason, Pavel Chizhov, Ivan P. Yamshchikov, Mark Fishel • 2025
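For the pruning side, one plausible reading of the leaf-based criterion described in the abstract (a sketch, not the paper's actual algorithm) is that a token is a "leaf" of the merge hierarchy when no other merge consumes it, so removing it cannot break the derivation of any remaining token:

```python
def leaf_tokens(vocab, merges):
    """Tokens never used as an input to any merge rule.

    Hypothetical leaf criterion: such tokens sit at the leaves of the
    merge hierarchy, so pruning them leaves all other merges intact.
    """
    used = {a for a, b in merges} | {b for a, b in merges}
    return [t for t in vocab if t not in used]
```

For example, with merges `("l","o")` and `("lo","w")`, the token `low` is a leaf while `lo` is not, since `lo` feeds a later merge.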

Related benchmarks

Task | Dataset | Metric | Result | Rank
Commonsense Reasoning | WinoGrande | Accuracy | 69.1 | 231
Causal Reasoning | XCOPA ET | Accuracy | 71.8 | 8
Commonsense Reasoning | Winogrande ET | Accuracy | 65.1 | 8
Machine Translation | FLORES EN-ET v1 (test) | COMET | 83.5 | 8
Machine Translation | FLORES ET-EN v1 (test) | COMET | 0.831 | 8
Reading Comprehension | Belebele ET | Accuracy | 52.9 | 8
Reading Comprehension | Belebele EN | Accuracy | 74.2 | 8
Topic Classification | SIB200 ET | Accuracy | 81.4 | 8
Topic Classification | SIB200 EN | Accuracy | 78.9 | 8
Text Compression | Estonian Corpus | Compression Ratio | 4.46 | 6
