F2LLM-v2: Inclusive, Performant, and Efficient Embeddings for a Multilingual World

About

We present F2LLM-v2, a new family of general-purpose, multilingual embedding models in eight sizes ranging from 80M to 14B parameters. Trained on a newly curated corpus of 60 million publicly available, high-quality data samples, F2LLM-v2 supports more than 200 languages, with a particular emphasis on previously underserved mid- and low-resource languages. By integrating a two-stage LLM-based embedding training pipeline with matryoshka learning, model pruning, and knowledge distillation, we obtain models that are far more efficient than previous LLM-based embedding models while retaining competitive performance. Extensive evaluations confirm that F2LLM-v2-14B ranks first on 11 MTEB benchmarks, while the smaller models in the family set a new state of the art for resource-constrained applications. To facilitate open-source embedding model research, we release all models, data, code, and intermediate checkpoints.
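The abstract names matryoshka learning and knowledge distillation but does not spell out the objectives. Below is a minimal, hypothetical sketch of how such objectives are commonly implemented for embedding models, not the paper's actual training code: the dimension list, temperature, and the choice of an in-batch InfoNCE loss plus a similarity-distribution distillation term are all illustrative assumptions.

```python
# Hypothetical sketch of matryoshka-style contrastive training with a
# distillation term. Dimensions, temperature, and loss choices are
# assumptions for illustration, not details from the F2LLM-v2 paper.
import torch
import torch.nn.functional as F

MATRYOSHKA_DIMS = [64, 128, 256, 512, 1024]  # assumed nested prefix sizes
TEMPERATURE = 0.05                            # assumed softmax temperature

def matryoshka_infonce(query_emb: torch.Tensor, doc_emb: torch.Tensor) -> torch.Tensor:
    """Average an in-batch InfoNCE loss over truncated embedding prefixes,
    so every leading slice of the embedding is trained as a usable
    representation in its own right."""
    total = torch.tensor(0.0, device=query_emb.device)
    for d in MATRYOSHKA_DIMS:
        q = F.normalize(query_emb[:, :d], dim=-1)   # truncate, then re-normalize
        p = F.normalize(doc_emb[:, :d], dim=-1)
        logits = q @ p.T / TEMPERATURE              # (batch, batch) similarities
        labels = torch.arange(q.size(0), device=q.device)  # positives on diagonal
        total = total + F.cross_entropy(logits, labels)
    return total / len(MATRYOSHKA_DIMS)

def embedding_distillation(student_emb: torch.Tensor, teacher_emb: torch.Tensor) -> torch.Tensor:
    """One common distillation recipe (the paper's exact objective may
    differ): align the student's in-batch similarity distribution with
    the teacher's via KL divergence."""
    s = F.normalize(student_emb, dim=-1)
    t = F.normalize(teacher_emb, dim=-1)
    s_logits = s @ s.T / TEMPERATURE
    t_logits = t @ t.T / TEMPERATURE
    return F.kl_div(
        F.log_softmax(s_logits, dim=-1),
        F.softmax(t_logits, dim=-1),
        reduction="batchmean",
    )
```

Because each prefix is trained directly, embeddings produced this way can be truncated at inference time to trade accuracy for storage and latency, which is what makes matryoshka learning attractive for the resource-constrained settings the abstract mentions.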

Ziyin Zhang, Zihan Liao, Hang Yu, Peng Di, Rui Wang • 2026

Related benchmarks

Task           | Dataset                                  | Result                       | Rank
Text Embedding | MTEB (Massive Text Embedding Benchmark)  | Average Score (Multi): 68.74 | 8
