Arctic-Embed 2.0: Multilingual Retrieval Without Compromise
About
This paper presents the training methodology of Arctic-Embed 2.0, a set of open-source text embedding models built for accurate and efficient multilingual retrieval. While prior multilingual embedding models have suffered from degraded English retrieval quality, Arctic-Embed 2.0 delivers competitive retrieval quality on both multilingual and English-only benchmarks, and supports Matryoshka Representation Learning (MRL) for efficient embedding storage with significantly lower quality degradation under compression than alternatives. We detail the design and implementation, presenting several important open research questions that arose during model development. We conduct experiments exploring these questions and include extensive discussion aimed at fostering further research in this field.
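The MRL-based storage saving mentioned above comes from truncating each embedding to a prefix of its dimensions and re-normalizing. The sketch below illustrates the general MRL truncation recipe, not Arctic-Embed 2.0's specific implementation; the 1024-dimension full size and 256-dimension target are illustrative assumptions, and the vectors are random placeholders rather than real model outputs.

```python
import numpy as np

def truncate_mrl(embeddings: np.ndarray, dim: int) -> np.ndarray:
    """Keep the first `dim` components of each embedding and re-normalize
    to unit length, as Matryoshka Representation Learning permits."""
    truncated = embeddings[:, :dim]
    norms = np.linalg.norm(truncated, axis=1, keepdims=True)
    return truncated / norms

# Placeholder embeddings standing in for real model output
# (the true output dimension of Arctic-Embed 2.0 may differ).
rng = np.random.default_rng(0)
full_embeddings = rng.normal(size=(4, 1024)).astype(np.float32)

# 4x storage reduction: 1024 -> 256 dimensions per vector.
compressed = truncate_mrl(full_embeddings, 256)
print(compressed.shape)  # (4, 256)
```

Because cosine similarity on unit vectors is just a dot product, the re-normalized prefixes can be searched with the same similarity machinery as the full embeddings.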
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Information Retrieval | BEIR (test) | -- | -- | 90 |
| Text Embedding | MTEB English v2 | Mean Score | 63.6 | 68 |
| Information Retrieval | BEIR | -- | -- | 62 |
| Multilingual Text Embedding | MTEB Multilingual | Mean Score (Task) | 57 | 29 |
| Multilingual Long-context Retrieval | MLDR | nDCG@10 | 34 | 28 |
| Multilingual Retrieval | MTEB Multilingual v2 | -- | -- | 28 |
| Retrieval | MTEB-E English v2 | MTEB-E Retrieval Score | 58.56 | 16 |
| Multilingual Document Retrieval | MIRACL (Evaluation set) | nDCG@10 | 64.9 | 14 |
| Multilingual Information Retrieval | MTEB-DE 4 subsets (test) | nDCG@10 | 55.9 | 11 |
| Multilingual Information Retrieval | MTEB-FR 5 subsets (test) | nDCG@10 | 54.5 | 11 |