
Accelerating Multilingual Language Model for Excessively Tokenized Languages

About

Recent advancements in large language models (LLMs) have remarkably enhanced performance on a variety of tasks in multiple languages. However, tokenizers in LLMs trained primarily on English-centric corpora often overly fragment text into character- or Unicode-level tokens in non-Roman alphabetic languages, leading to inefficient text generation. We introduce a simple yet effective framework to accelerate text generation in such languages. Our approach equips a pre-trained LLM with a new language model head whose vocabulary is tailored to a specific target language, then fine-tunes the new head while incorporating a verification step to ensure the model's performance is preserved. We show that this targeted fine-tuning, with all other model parameters frozen, effectively reduces token fragmentation for the target language. Our extensive experiments demonstrate that the proposed framework increases generation speed by a factor of 1.7 while maintaining the performance of pre-trained multilingual models on target monolingual tasks.
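
The recipe outlined above is: attach a new language-model head with a target-language vocabulary, freeze the pre-trained backbone, and fine-tune only the head. The sketch below illustrates that setup in PyTorch with Hugging Face Transformers; it is not the authors' implementation. The backbone name, the vocabulary size, and the way target-language labels are constructed are illustrative assumptions, and the inference-time verification step is omitted.

```python
# Minimal sketch of the setup described above (NOT the paper's implementation):
# attach a new LM head whose vocabulary is tailored to the target language,
# freeze the pre-trained backbone, and fine-tune only that head.
# Backbone name, vocabulary size, and label construction are assumptions.
import torch
import torch.nn as nn
from transformers import AutoModelForCausalLM

BASE_MODEL = "meta-llama/Llama-2-7b-hf"  # assumed backbone; any causal LM works
TARGET_VOCAB_SIZE = 32_000               # assumed size of the target-language vocabulary

backbone = AutoModelForCausalLM.from_pretrained(BASE_MODEL)
backbone.eval()
hidden_size = backbone.config.hidden_size

# New head mapping hidden states to the target-language vocabulary, whose entries
# are longer (less fragmented) units than the backbone's character/Unicode-level tokens.
target_lm_head = nn.Linear(hidden_size, TARGET_VOCAB_SIZE, bias=False)

# Freeze every backbone parameter; only the new head receives gradients.
for param in backbone.parameters():
    param.requires_grad = False

optimizer = torch.optim.AdamW(target_lm_head.parameters(), lr=1e-4)
loss_fn = nn.CrossEntropyLoss()

def training_step(input_ids: torch.Tensor, target_ids: torch.Tensor) -> float:
    """One fine-tuning step with a frozen backbone.

    input_ids  -- tokens in the backbone's original vocabulary
    target_ids -- next-token labels in the new target-language vocabulary
                  (how these labels are aligned to the input is part of the
                  method and omitted here)
    """
    with torch.no_grad():  # backbone is frozen, so no gradients flow through it
        hidden = backbone(input_ids, output_hidden_states=True).hidden_states[-1]
    logits = target_lm_head(hidden)  # (batch, seq, TARGET_VOCAB_SIZE)
    loss = loss_fn(logits.view(-1, TARGET_VOCAB_SIZE), target_ids.view(-1))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

Because the backbone stays frozen, the original model and tokenizer remain intact at inference time; the verification step mentioned in the abstract, which checks the new head's outputs against the pre-trained model so that quality is preserved, is not shown in this sketch.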

Jimin Hong, Gibbeum Lee, Jaewoong Cho • 2024

Related benchmarks

Task | Dataset | Metric | Result | Rank
Machine Translation | Japanese-English (test) | BLEU | 26.9 | 8
Machine Translation | FLoRes-200 Korean (test) | BLEU | 21.7 | 5
Summarization | XLSum Japanese (test) | ROUGE-2 | 11.6 | 5
Machine Translation | FLoRes-200 Japanese (test) | BLEU | 24.3 | 5
Summarization | XLSum Korean (test) | ROUGE-2 | 20.3 | 5
Summarization | Japanese Summarization (test) | ROUGE-2 | 13.2 | 2
Summarization | Korean Summarization (test) | ROUGE-2 | 22.8 | 2
Summarization | Korean Summarization (Ko) (test) | ROUGE-2 | 12.8 | 2
Summarization | Japanese Summarization JA (test) | ROUGE-2 | 9.6 | 2
Translation | Korean Translation (test) | BLEU | 18.3 | 2

Showing 10 of 12 rows.
