
Massively Multilingual Sentence Embeddings for Zero-Shot Cross-Lingual Transfer and Beyond

About

We introduce an architecture to learn joint multilingual sentence representations for 93 languages, belonging to more than 30 different families and written in 28 different scripts. Our system uses a single BiLSTM encoder with a shared BPE vocabulary for all languages, which is coupled with an auxiliary decoder and trained on publicly available parallel corpora. This enables us to learn a classifier on top of the resulting embeddings using English annotated data only, and transfer it to any of the 93 languages without any modification. Our experiments in cross-lingual natural language inference (XNLI dataset), cross-lingual document classification (MLDoc dataset) and parallel corpus mining (BUCC dataset) show the effectiveness of our approach. We also introduce a new test set of aligned sentences in 112 languages, and show that our sentence embeddings obtain strong results in multilingual similarity search even for low-resource languages. Our implementation, the pre-trained encoder and the multilingual test set are available at https://github.com/facebookresearch/LASER
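The zero-shot transfer described above can be sketched as follows: a classifier is fit on sentence embeddings of English training data only, then applied unchanged to sentences in other languages, which works because the encoder maps all 93 languages into one shared embedding space. The `embed` function and the toy vectors below are illustrative stand-ins for the real LASER encoder (see the repository linked above), and the nearest-centroid classifier is just one simple choice of classifier; none of this is the paper's exact setup.

```python
# Illustrative sketch of zero-shot cross-lingual transfer via a shared
# multilingual sentence-embedding space. The embeddings are hypothetical
# toy vectors, NOT real LASER outputs.
from math import sqrt

def embed(sentence: str, lang: str) -> list[float]:
    """Stand-in for the shared multilingual encoder: a toy lookup table
    in which translations map to near-identical vectors."""
    toy_space = {
        ("The movie was great.", "en"): [0.90, 0.10],
        ("The movie was awful.", "en"): [0.10, 0.90],
        ("Der Film war großartig.", "de"): [0.88, 0.12],   # ~ English "great"
        ("La película fue horrible.", "es"): [0.12, 0.88], # ~ English "awful"
    }
    return toy_space[(sentence, lang)]

def fit_centroids(labeled_english):
    """Train a nearest-centroid classifier on English embeddings only."""
    sums, counts = {}, {}
    for sent, label in labeled_english:
        vec = embed(sent, "en")
        acc = sums.setdefault(label, [0.0] * len(vec))
        for i, v in enumerate(vec):
            acc[i] += v
        counts[label] = counts.get(label, 0) + 1
    return {lbl: [v / counts[lbl] for v in acc] for lbl, acc in sums.items()}

def predict(centroids, sentence, lang):
    """Classify a sentence in ANY encoder language with the English-trained
    classifier: no per-language modification is needed."""
    vec = embed(sentence, lang)
    dist = lambda c: sqrt(sum((a - b) ** 2 for a, b in zip(vec, c)))
    return min(centroids, key=lambda lbl: dist(centroids[lbl]))

centroids = fit_centroids([
    ("The movie was great.", "pos"),
    ("The movie was awful.", "neg"),
])
# Zero-shot: German and Spanish sentences never seen during training.
print(predict(centroids, "Der Film war großartig.", "de"))    # pos
print(predict(centroids, "La película fue horrible.", "es"))  # neg
```

Because translations land close together in the shared space, the decision boundary learned from English carries over to the other languages for free; this is the same mechanism that makes the multilingual similarity search and parallel-corpus mining results possible.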

Mikel Artetxe, Holger Schwenk • 2018

Related benchmarks

Task | Dataset | Metric | Result | Rank
Natural Language Inference | XNLI (test) | Average Accuracy | 70.2 | 167
Analogy | Universal Analogy Sentence | Semantic Accuracy | 1.6 | 17
Analogy | Universal Analogy Overall | Accuracy | 37.6 | 17
Analogy | Universal Analogy Word | Semantic Accuracy | 26.9 | 17
Analogy | Universal Analogy Phrase | Semantic Accuracy | 0.0 | 17
Document Classification | MLDoc 1.0 (test) | Accuracy (DE) | 0.927 | 12
Cross-lingual Semantic Similarity | XL (test) | Spearman's rho | 69 | 12
Question Paraphrase Retrieval | GEOGRANNO (train-dev) | Top-1 Acc | 6.3 | 9
News document classification | MLDoc (test) | Error Rate (FR) | 21.97 | 9
Sentiment Classification | CLS (test) | Accuracy (DE Books) | 84.15 | 8
Showing 10 of 28 rows

Other info

Code: https://github.com/facebookresearch/LASER
