
Towards Cross-Tokenizer Distillation: the Universal Logit Distillation Loss for LLMs

About

Deploying large language models (LLMs) of several billion parameters can be impractical in many industrial use cases due to constraints such as cost, latency limits, and hardware accessibility. Knowledge distillation (KD) offers a solution by compressing knowledge from resource-intensive large models into smaller ones. Various strategies exist, some relying on the text generated by the teacher model and optionally leveraging its logits to enhance learning. However, these logit-based methods typically require the teacher and student models to share the same tokenizer, limiting their applicability across different LLM families. In this paper, we introduce the Universal Logit Distillation (ULD) loss, grounded in optimal transport, to address this limitation. Our experimental results demonstrate the effectiveness of the ULD loss in enabling distillation across models with different architectures and tokenizers, paving the way for a more widespread use of distillation techniques.
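To make the idea concrete, here is a minimal, hypothetical sketch of an optimal-transport-style logit loss that does not require a shared tokenizer: each model's logits are turned into probabilities, sorted in descending order, and the shorter vocabulary is padded with zero-probability slots, after which the transport cost between the two sorted distributions reduces to an element-wise absolute difference. Function names and the padding scheme are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over a 1-D logit vector."""
    z = np.exp(logits - np.max(logits))
    return z / z.sum()

def uld_style_loss(teacher_logits, student_logits):
    """Illustrative cross-tokenizer logit loss (sketch, not the official ULD code).

    Sorting both probability vectors removes the dependence on token
    identity, so the teacher and student vocabularies may differ in
    content and in size.
    """
    p = np.sort(softmax(np.asarray(teacher_logits, dtype=float)))[::-1]
    q = np.sort(softmax(np.asarray(student_logits, dtype=float)))[::-1]
    # Pad the smaller vocabulary with zero-probability entries.
    n = max(len(p), len(q))
    p = np.pad(p, (0, n - len(p)))
    q = np.pad(q, (0, n - len(q)))
    # With both distributions sorted, the optimal transport cost
    # reduces to an element-wise absolute difference.
    return float(np.abs(p - q).sum())
```

Note that the loss is well defined even when the two vocabularies have different sizes, which is exactly the situation a shared-tokenizer KL loss cannot handle.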

Nicolas Boizard, Kevin El Haddad, Céline Hudelot, Pierre Colombo • 2024

Related benchmarks

Task                                       Dataset         Metric    Result   Rank
Code Generation                            HumanEval       Pass@1    48.7     850
Mathematical Reasoning                     GSM8K (test)    Accuracy  47.1     797
Physical Interaction Question Answering    PIQA            Accuracy  75.1     323
Boolean Question Answering                 BoolQ           Accuracy  78.4     307
Math Reasoning                             GSM8K (test)    Accuracy  26.38    155
Sentence Completion                        HellaSwag       Accuracy  48.1     133
Commonsense Reasoning                      CommonsenseQA   Accuracy  75.3     132
Multiple-choice Question Answering         ARC Easy        Accuracy  72.2     122
Code Generation                            MBPP            Pass@1    41.2     113
Multiple-choice Question Answering         ARC Challenge   Accuracy  45.1     106

(Showing 10 of 14 rows)
