
FOCUS: Effective Embedding Initialization for Monolingual Specialization of Multilingual Models

About

Using model weights pretrained on a high-resource language as a warm start can reduce the data and compute needed to obtain high-quality language models for other, especially low-resource, languages. However, if we want to use a new tokenizer specialized for the target language, we cannot transfer the source model's embedding matrix. In this paper, we propose FOCUS (Fast Overlapping Token Combinations Using Sparsemax), a novel embedding initialization method that effectively initializes the embedding matrix for a new tokenizer based on information in the source model's embedding matrix. FOCUS represents newly added tokens as combinations of tokens in the overlap of the source and target vocabularies. The overlapping tokens are selected based on semantic similarity in an auxiliary static token embedding space. We focus our study on using the multilingual XLM-R as a source model and empirically show that FOCUS outperforms random initialization and previous work in language modeling and on a range of downstream tasks (NLI, QA, and NER).
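The core idea above can be sketched in a few lines: for each new target-vocabulary token, compute its similarity to the overlapping tokens in an auxiliary static embedding space, turn the similarities into sparse weights with sparsemax, and take the weighted combination of the overlapping tokens' source embeddings. This is a minimal, hedged illustration, not the authors' implementation; the function names (`sparsemax`, `focus_init`) and the cosine-similarity choice are assumptions for this sketch.

```python
import numpy as np

def sparsemax(z):
    # Sparsemax (Martins & Astudillo, 2016): like softmax, but projects onto
    # the probability simplex, so many weights become exactly zero.
    z_sorted = np.sort(z)[::-1]
    cumsum = np.cumsum(z_sorted)
    k = np.arange(1, len(z) + 1)
    support = 1 + k * z_sorted > cumsum
    k_z = k[support][-1]                      # size of the support set
    tau = (cumsum[k_z - 1] - 1.0) / k_z       # threshold
    return np.maximum(z - tau, 0.0)

def focus_init(new_token_static, overlap_static, overlap_source_embs):
    """Initialize one new token's embedding as a sparsemax-weighted
    combination of the source embeddings of overlapping tokens.

    new_token_static:    static embedding of the new token, shape (d_static,)
    overlap_static:      static embeddings of overlapping tokens, (n, d_static)
    overlap_source_embs: source-model embeddings of those tokens, (n, d_model)
    """
    # Cosine similarity in the auxiliary static embedding space.
    a = new_token_static / np.linalg.norm(new_token_static)
    B = overlap_static / np.linalg.norm(overlap_static, axis=1, keepdims=True)
    sims = B @ a
    weights = sparsemax(sims)   # sparse, non-negative, sums to 1
    return weights @ overlap_source_embs
```

Tokens already present in both vocabularies would simply copy their source embeddings; only the genuinely new tokens need this combination step.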

Konstantin Dobler, Gerard de Melo • 2023

Related benchmarks

Task | Dataset | Result | Rank
Natural Language Inference | XNLI | -- | 111
Language Modeling | CulturaX v1.0 (val) | Language Score (ar): 6.8 | 14
Causal Reasoning | XCOPA | Accuracy (zh): 55.4 | 12
Paraphrase Identification | PAWS-X | Accuracy (de): 51.1 | 12
Cross-lingual Generalization | Cross-lingual Transfer Summary | Avg Score: 47.3 | 12
Commonsense Reasoning | XStoryCloze | Accuracy (en): 61 | 12
Question Answering | Knowledge-based Benchmarks Vietnamese | ARC Score: 27.86 | 8
Question Answering | Knowledge-based Benchmarks German | ARC Score: 26.95 | 8
Question Answering | Knowledge-based Benchmarks Arabic | ARC Score: 25.66 | 8
Machine Reading Comprehension | MLQA English - Target v1.0 (test) | German EM: 27.21 | 4

Showing 10 of 11 rows
