mmBERT: A Modern Multilingual Encoder with Annealed Language Learning
About
Encoder-only language models are frequently used for a variety of standard machine learning tasks, including classification and retrieval. However, there has been little recent research on encoder models, especially multilingual ones. We introduce mmBERT, an encoder-only language model pretrained on 3T tokens of multilingual text covering over 1800 languages. To build mmBERT we introduce several novel elements, including an inverse mask ratio schedule and an inverse temperature sampling ratio. We add over 1700 low-resource languages to the data mix only during the decay phase, showing that this boosts performance dramatically and maximizes the gains from the relatively small amount of training data. Despite only including these low-resource languages in the short decay phase, we achieve classification performance similar to models like OpenAI's o3 and Google's Gemini 2.5 Pro. Overall, we show that mmBERT significantly outperforms the previous generation of models on classification and retrieval tasks, on both high- and low-resource languages.
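The two schedules named above can be sketched concretely. The snippet below is a minimal illustration, not the authors' implementation: the start/end constants, the linear annealing, and the function names are all assumptions for exposition. It shows a mask ratio that decays over training (the "inverse mask ratio schedule") and a temperature-based language sampler whose temperature is annealed so that low-resource languages receive more weight late in training (the "inverse temperature sampling ratio").

```python
import numpy as np

def mask_ratio(step: int, total_steps: int,
               start: float = 0.30, end: float = 0.05) -> float:
    """Inverse mask ratio schedule (illustrative constants): mask
    aggressively early in training and anneal to a lower ratio."""
    frac = step / total_steps
    return start + (end - start) * frac  # linear decay; the paper's exact schedule may differ

def sampling_tau(step: int, total_steps: int,
                 start: float = 0.7, end: float = 0.3) -> float:
    """Inverse temperature schedule (illustrative constants): begin biased
    toward high-resource languages and anneal toward a flatter distribution."""
    frac = step / total_steps
    return start + (end - start) * frac

def language_sampling_probs(token_counts: np.ndarray, tau: float) -> np.ndarray:
    """Temperature sampling over languages: p_i proportional to n_i ** tau.
    Lower tau flattens the distribution, upweighting low-resource languages."""
    weights = token_counts.astype(float) ** tau
    return weights / weights.sum()

# Example: three languages with very different corpus sizes.
counts = np.array([1_000_000_000, 10_000_000, 100_000])
total = 1_000_000
for step in (0, total // 2, total):
    tau = sampling_tau(step, total)
    print(f"step={step:>9}  mask={mask_ratio(step, total):.3f}  "
          f"probs={language_sampling_probs(counts, tau).round(4)}")
```

Running the example shows the intended effect: at step 0 the largest corpus dominates the sampling distribution, while by the end of training the smallest language's probability has grown by orders of magnitude, mirroring how mmBERT shifts weight toward low-resource languages during the decay phase.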
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Text Embedding | MTEB | MTEB Score | 54.48 | 45 |
| Information Retrieval | TREC-COVID | NDCG@10 | 30.77 | 44 |
| Cross-lingual Language Understanding | XTREME | XNLI Accuracy | 80.54 | 43 |
| Text Embedding | MTEB Turkish (test) | Overall MTEB Score | 39.65 | 23 |
| Retrieval | SCIDOCS | nDCG@10 | 10 | 18 |
| Text Embedding | AfriMTEB (AMH, GAZ, HAU, IBO, KIN, SWA, XHO, YOR, ZUL) Lite (test) | Hate Speech Score | 48.2 | 16 |
| Text Embedding | AfriMTEB Full | Btxt Score | 3 | 15 |
| Long-context Language Understanding | Long tasks (4 tasks) (val) | Long Tasks Score | 82.08 | 13 |
| Language Understanding | Other tasks (9 tasks) (val) | Other Tasks Score | 80.14 | 13 |
| General Language Understanding | All tasks (25 tasks) (val) | Overall Accuracy | 80.24 | 13 |