EmbeddingGemma: Powerful and Lightweight Text Representations
About
We introduce EmbeddingGemma, a new lightweight, open text embedding model based on the Gemma 3 language model family. Our innovative training recipe strategically captures knowledge from larger models via encoder-decoder initialization and geometric embedding distillation. We improve model robustness and expressiveness with a spread-out regularizer, and ensure generalizability by merging checkpoints from varied, optimized mixtures. Evaluated on the Massive Text Embedding Benchmark (MTEB) across multilingual, English, and code domains, EmbeddingGemma (300M) achieves state-of-the-art results. Notably, it outperforms prior top models, both proprietary and open, with fewer than 500M parameters, and provides performance comparable to models double its size, offering an exceptional performance-to-cost ratio. Remarkably, this lead persists when quantizing model weights or truncating embedding outputs. This makes EmbeddingGemma particularly well-suited for low-latency and high-throughput use cases such as on-device applications. We provide ablation studies exploring our key design choices. We release EmbeddingGemma to the community to promote further research.
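The claim that performance persists when truncating embedding outputs implies a Matryoshka-style scheme, where a prefix of the embedding vector remains a usable representation. The sketch below illustrates the general idea with synthetic vectors; the 768/256 dimensions and the `truncate_embedding` helper are illustrative assumptions, not documented EmbeddingGemma properties.

```python
import numpy as np

def truncate_embedding(vec: np.ndarray, dim: int) -> np.ndarray:
    """Keep the first `dim` components and re-normalize to unit length,
    as done when truncating Matryoshka-style embeddings."""
    head = vec[:dim]
    return head / np.linalg.norm(head)

# Synthetic stand-ins for full-size embeddings (768 dims is an assumption).
rng = np.random.default_rng(0)
a = truncate_embedding(rng.standard_normal(768), 768)
b = truncate_embedding(rng.standard_normal(768), 768)

# Cosine similarity can be computed at full size or after truncation;
# with a Matryoshka-trained model the truncated score stays close.
full_sim = float(a @ b)
short_sim = float(truncate_embedding(a, 256) @ truncate_embedding(b, 256))
```

Because the truncated vector is re-normalized, downstream cosine-similarity retrieval code needs no other changes; only the index dimensionality shrinks.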
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Information Retrieval | BEIR | -- | 59 |
| Text Embedding | MTEB English v2 | Mean Score: 69.7 | 50 |
| Multilingual Text Embedding | MTEB Multilingual | Mean Score (Task): 61.1 | 29 |
| Text Embedding | MTEB Turkish (test) | Overall MTEB Score: 65.42 | 23 |
| Retrieval | MTEB-E English v2 | MTEB-E Retrieval Score: 55.69 | 16 |
| Sequential Recommendation | Amazon Video Games 2023 (test) | Recall@10: 2.44 | 15 |
| Multilingual Retrieval | MTEB Multilingual v2 | MTEB-M Score: 62.49 | 11 |
| Retrieval | RTEB Multilingual Public | RTEB: 63.75 | 11 |
| Retrieval | LongEmbed | Long Task Score: 55.29 | 11 |
| Retrieval | Legal | Legal Score: 50.63 | 10 |