
Cross-Tokenizer Likelihood Scoring Algorithms for Language Model Distillation

About

Computing next-token likelihood ratios between two language models (LMs) is a standard task in training paradigms such as knowledge distillation. Since this requires both models to share the same probability space, it becomes challenging when the teacher and student LMs use different tokenizers, for instance, when edge-device deployment necessitates a smaller vocabulary size to lower memory overhead. In this work, we address this vocabulary misalignment problem by uncovering an implicit recursive structure in the commonly deployed Byte-Pair Encoding (BPE) algorithm and using it to build a probabilistic framework for cross-tokenizer likelihood scoring. Our method enables sequence likelihood evaluation for vocabularies that differ from the teacher model's native tokenizer, covering two specific scenarios: when the student vocabulary is a subset of the teacher vocabulary, and the general case where it is arbitrary. In the subset regime, our framework computes exact likelihoods and provides next-token probabilities for sequential sampling with only O(1) model evaluations per token. When used for distillation, this yields up to a 12% reduction in memory footprint for the Qwen2.5-1.5B model while also improving baseline performance by up to 4% on the evaluated tasks. For the general case, we introduce a rigorous lossless procedure that leverages the recursive structure of BPE, complemented by a fast approximation that keeps large-vocabulary settings practical. Applied to distillation for mathematical reasoning, our approach improves GSM8K accuracy by more than 2% over the current state of the art.
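The recursive structure the abstract refers to can be illustrated with a toy sketch: every non-base BPE token is the merge of exactly two earlier tokens, so a teacher token absent from a student's subset vocabulary can be recursively split into pieces the student does have, letting a teacher-tokenized sequence be re-expressed (and then scored by the chain rule) over the smaller vocabulary. All names and the merge table below are illustrative, not the paper's actual implementation.

```python
# Toy illustration of BPE's recursive merge structure, assuming a merge table
# mapping each merged token to the (left, right) pair it was built from.

def decompose(token, merges, subset_vocab):
    """Recursively split `token` into pieces present in `subset_vocab`."""
    if token in subset_vocab:
        return [token]
    # Every non-base BPE token has a unique merge into two earlier tokens.
    left, right = merges[token]
    return decompose(left, merges, subset_vocab) + decompose(right, merges, subset_vocab)

# Hypothetical teacher merge table: "abcd" was formed by merging "ab" and "cd".
merges = {"ab": ("a", "b"), "cd": ("c", "d"), "abcd": ("ab", "cd")}
subset_vocab = {"a", "b", "cd"}  # student keeps only some teacher tokens

print(decompose("abcd", merges, subset_vocab))  # ['a', 'b', 'cd']
```

Because the decomposition is deterministic, the student-vocabulary log-likelihood of a sequence is just the sum of next-token log-probabilities over the decomposed pieces.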

Buu Phan, Ashish Khisti, Karen Ullrich • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Code Generation | HumanEval | Pass@1 | 52.4 | 850 |
| Mathematical Reasoning | GSM8K (test) | Accuracy | 55.6 | 797 |
| Physical Interaction Question Answering | PIQA | Accuracy | 75.5 | 323 |
| Boolean Question Answering | BoolQ | Accuracy | 78.9 | 307 |
| Sentence Completion | HellaSwag | Accuracy | 49.5 | 133 |
| Commonsense Reasoning | CommonsenseQA | Accuracy | 75.7 | 132 |
| Multiple-choice Question Answering | ARC Easy | Accuracy | 74.0 | 122 |
| Code Generation | MBPP | Pass@1 | 44.6 | 113 |
| Multiple-choice Question Answering | ARC Challenge | Accuracy | 46.9 | 106 |
| Summarization | DialogSum 1.5k examples (val) | ROUGE-L | 33.9 | 11 |
