
jina-embeddings-v5-text: Task-Targeted Embedding Distillation

About

Text embedding models are widely used for semantic similarity tasks, including information retrieval, clustering, and classification. General-purpose models are typically trained with single- or multi-stage processes using contrastive loss functions. We introduce a novel training regimen that combines model distillation techniques with task-specific contrastive loss to produce compact, high-performance embedding models. Our findings suggest that this approach is more effective for training small models than purely contrastive or distillation-based training paradigms alone. Benchmark scores for the resulting models, jina-embeddings-v5-text-small and jina-embeddings-v5-text-nano, exceed or match the state-of-the-art for models of similar size. jina-embeddings-v5-text models additionally support long texts (up to 32k tokens) in many languages, and generate embeddings that remain robust under truncation and binary quantization. Model weights are publicly available, hopefully inspiring further advances in embedding model development.
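The abstract notes that the embeddings remain robust under truncation and binary quantization. As an illustration only (not the authors' code), here is a minimal NumPy sketch of how a downstream user might apply both operations to a full-precision embedding vector; the output dimension of 1024 and the truncation length of 256 are assumed values for the example.

```python
import numpy as np

def truncate_and_binarize(emb: np.ndarray, dim: int = 256) -> np.ndarray:
    """Truncate an embedding to its first `dim` components, re-normalize,
    then binary-quantize each component by sign (1 if >= 0, else 0)."""
    t = emb[:dim]
    t = t / np.linalg.norm(t)
    return (t >= 0).astype(np.uint8)

# Toy example with a random stand-in for a model embedding.
rng = np.random.default_rng(0)
embedding = rng.standard_normal(1024)
bits = truncate_and_binarize(embedding, dim=256)
```

After quantization, similarity search can use Hamming distance on the packed bit codes, which is far cheaper in storage and compute than cosine similarity over float vectors.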

Mohammad Kalim Akram, Saba Sturua, Nastia Havriushenko, Quentin Herreros, Michael Günther, Maximilian Werk, Han Xiao • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Information Retrieval | BEIR | - | - | 59 |
| Text Embedding | MTEB English v2 | Mean Score | 71.7 | 50 |
| Multilingual Text Embedding | MTEB Multilingual | Mean Score (Task) | 67 | 29 |
| Retrieval | MTEB-E English v2 | MTEB-E Retrieval Score | 60.07 | 16 |
| Multilingual Retrieval | MTEB Multilingual v2 | MTEB-M Score | 64.88 | 11 |
| Retrieval | RTEB Multilingual Public | RTEB | 66.84 | 11 |
| Retrieval | LongEmbed | Long Task Score | 66.39 | 11 |
