
Making Monolingual Sentence Embeddings Multilingual using Knowledge Distillation

About

We present an easy and efficient method to extend existing sentence embedding models to new languages. This allows us to create multilingual versions of previously monolingual models. The training is based on the idea that a translated sentence should be mapped to the same location in the vector space as the original sentence. We use the original (monolingual) model to generate sentence embeddings for the source language and then train a new system on translated sentences to mimic the original model. Compared to other methods for training multilingual sentence embeddings, this approach has several advantages: it is easy to extend existing models to new languages with relatively few samples, it is easier to ensure desired properties of the vector space, and the hardware requirements for training are lower. We demonstrate the effectiveness of our approach for 50+ languages from various language families. Code to extend sentence embedding models to more than 400 languages is publicly available.
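The distillation objective described above can be sketched in a few lines: the (frozen) teacher embeds the source sentence, and the student is trained so that both the source sentence and its translation land on that same point, via mean squared error. The function name, toy embeddings, and dimensions below are illustrative assumptions, not the authors' code:

```python
import numpy as np

def distillation_loss(teacher_src, student_src, student_tgt):
    """Knowledge-distillation objective: the student must map both the
    source sentence and its translation to the teacher's embedding of
    the source sentence (two mean-squared-error terms)."""
    return (np.mean((teacher_src - student_src) ** 2)
            + np.mean((teacher_src - student_tgt) ** 2))

# Toy batch: 2 sentence pairs with 4-dim embeddings (hypothetical values).
teacher_src = np.array([[1.0, 0.0, 0.0, 0.0],
                        [0.0, 1.0, 0.0, 0.0]])
student_src = teacher_src.copy()          # student already matches on the source language
student_tgt = np.zeros_like(teacher_src)  # translations not yet aligned

loss = distillation_loss(teacher_src, student_src, student_tgt)
print(loss)  # only the translation term contributes here
```

Minimizing this loss over parallel data pulls translations toward the teacher's vector space, which is what lets the multilingual student inherit the monolingual model's properties.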

Nils Reimers, Iryna Gurevych • 2020

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Text Embedding | MTEB English v2 | Mean Score 57.71 | 50 |
| Question Answering | NaturalQuestions processed | Accuracy 60.75 | 22 |
| Multi-hop Question Answering | HotpotQA 10 related documents | F1 42.92 | 21 |
| Question Answering | TriviaQA adversarial Contriever retrieved top 10 | EM 49.49 | 19 |
| Cross-lingual Semantic Similarity | XL (test) | Spearman's rho 82.4 | 12 |
| Table Row Alignment | INFOSYNC Match (test) | Accuracy (EN-FR) 80.98 | 11 |
| Table Row Alignment | INFOSYNC UnMatch (test) | Alignment Score (EN <-> FR) 82.68 | 11 |
| Cross-lingual Semantic Similarity | XL s. (test) | Spearman's rho 82.9 | 6 |
| Bitext Mining | BUCC (test) | Score (de-en) 90.8 | 6 |
| Bitext Mining | Tatoeba 112 languages (test) | Accuracy 67.1 | 4 |

Showing 10 of 12 rows.
