
Arctic-Embed 2.0: Multilingual Retrieval Without Compromise

About

This paper presents the training methodology of Arctic-Embed 2.0, a set of open-source text embedding models built for accurate and efficient multilingual retrieval. While prior multilingual models have suffered from degraded English retrieval quality, Arctic-Embed 2.0 delivers competitive retrieval quality on both multilingual and English-only benchmarks, and supports Matryoshka Representation Learning (MRL) for efficient embedding storage, with significantly lower quality degradation under compression than alternatives. We detail the design and implementation, presenting several important open research questions that arose during model development. We conduct experiments exploring these questions and include extensive discussion aimed at fostering further research in this field.
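The MRL support mentioned above means the model's embeddings remain usable after truncation to a shorter prefix. A minimal sketch of how such embeddings are typically consumed at inference time (the function names here are illustrative, not from the paper): keep only the first `dim` components, renormalize to unit length, and score with a dot product.

```python
import math

def truncate_and_renormalize(embedding, dim):
    """Keep only the first `dim` components of an MRL-trained
    embedding and rescale the prefix back to unit length."""
    prefix = embedding[:dim]
    norm = math.sqrt(sum(x * x for x in prefix))
    return [x / norm for x in prefix]

def cosine(a, b):
    """Cosine similarity; for unit-normalized vectors this is
    just the dot product."""
    return sum(x * y for x, y in zip(a, b))

# Example: compress a 4-dim embedding down to 2 dims.
full = [3.0, 4.0, 0.0, 0.0]
small = truncate_and_renormalize(full, 2)  # [0.6, 0.8]
```

Storing only the truncated prefix cuts index size proportionally (e.g. keeping 256 of 1024 dimensions gives a 4x reduction), and the paper's claim is that MRL training keeps the retrieval-quality loss from this truncation small.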

Puxuan Yu, Luke Merrick, Gaurav Nuti, Daniel Campos • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Information Retrieval | BEIR (test) | – | – | 90 |
| Text Embedding | MTEB English v2 | Mean Score | 63.6 | 68 |
| Information Retrieval | BEIR | – | – | 62 |
| Multilingual Text Embedding | MTEB Multilingual | Mean Score (Task) | 57 | 29 |
| Multilingual Long-context Retrieval | MLDR | nDCG@10 | 34 | 28 |
| Multilingual Retrieval | MTEB Multilingual v2 | – | – | 28 |
| Retrieval | MTEB-E English v2 | MTEB-E Retrieval Score | 58.56 | 16 |
| Multilingual Document Retrieval | MIRACL (Evaluation set) | nDCG@10 | 64.9 | 14 |
| Multilingual Information Retrieval | MTEB-DE 4 subsets (test) | nDCG@10 | 55.9 | 11 |
| Multilingual Information Retrieval | MTEB-FR 5 subsets (test) | nDCG@10 | 54.5 | 11 |

Showing 10 of 17 rows.
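Several of the benchmark results above are reported as nDCG@10, the standard ranked-retrieval metric. A short sketch of the standard formula (this is the textbook definition, not code from the paper): discounted cumulative gain over the top k results, normalized by the gain of an ideally ordered ranking.

```python
import math

def dcg_at_k(relevances, k=10):
    """Discounted cumulative gain: relevance of the result at
    rank i (0-indexed) is discounted by log2(i + 2)."""
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances, k=10):
    """Normalize DCG by the DCG of the ideal (descending) ordering,
    so a perfect ranking scores 1.0."""
    ideal = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal if ideal > 0 else 0.0

# A perfect ordering scores 1.0; burying the only relevant
# document at rank 3 scores 0.5.
ndcg_at_k([3, 2, 1])  # 1.0
ndcg_at_k([0, 0, 1])  # 0.5
```

So an MIRACL score of 64.9 nDCG@10 means the model's top-10 rankings recover, on average, about 65% of the gain an ideal ordering of the judged documents would achieve.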
