
mmBERT: A Modern Multilingual Encoder with Annealed Language Learning

About

Encoder-only language models are frequently used for a variety of standard machine learning tasks, including classification and retrieval. However, there has been a lack of recent research on encoder models, especially multilingual ones. We introduce mmBERT, an encoder-only language model pretrained on 3T tokens of multilingual text in over 1800 languages. To build mmBERT we introduce several novel elements, including an inverse mask ratio schedule and an inverse temperature sampling ratio. We add over 1700 low-resource languages to the data mix only during the decay phase, showing that this boosts performance dramatically and maximizes the gains from the relatively small amount of training data. Despite only including these low-resource languages in the short decay phase, we achieve classification performance similar to models like OpenAI's o3 and Google's Gemini 2.5 Pro. Overall, we show that mmBERT significantly outperforms the previous generation of models on classification and retrieval tasks, on both high- and low-resource languages.
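To make the two schedules named above concrete, here is a minimal Python sketch. The specific phase boundaries, mask ratios (0.30 annealed to 0.05), temperatures (0.7 annealed to 0.3), and token counts are illustrative assumptions, not values taken from the paper; only the general shape of each schedule follows the description above.

```python
# Illustrative sketch of an inverse mask ratio schedule and inverse
# temperature sampling. All concrete numbers below are assumptions.

def mask_ratio(progress: float, start: float = 0.30, end: float = 0.05) -> float:
    """Inverse mask ratio schedule: the MLM masking rate begins high and is
    annealed downward as pretraining progresses (progress in [0, 1])."""
    return start + (end - start) * progress

def language_probs(token_counts: dict[str, float], tau: float) -> dict[str, float]:
    """Temperature sampling over languages: p_i is proportional to q_i**tau,
    where q_i is a language's share of the corpus. Lower tau flattens the
    distribution, up-weighting low-resource languages."""
    total = sum(token_counts.values())
    weighted = {lang: (n / total) ** tau for lang, n in token_counts.items()}
    z = sum(weighted.values())
    return {lang: w / z for lang, w in weighted.items()}

# Inverse temperature schedule: anneal tau downward over training, so language
# sampling moves from roughly corpus-proportional toward more uniform.
for progress, tau in [(0.0, 0.7), (0.8, 0.5), (1.0, 0.3)]:
    probs = language_probs({"en": 900.0, "tr": 90.0, "xh": 10.0}, tau)
    print(f"progress={progress:.1f}  mask={mask_ratio(progress):.2f}  "
          + "  ".join(f"{lang}={p:.3f}" for lang, p in probs.items()))
```

Run as-is, this prints how the masking rate falls while the sampling distribution flattens, which matches the paper's stated motivation for adding low-resource languages late: by the decay phase, they receive a larger share of the sampling budget.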

Marc Marone, Orion Weller, William Fleshman, Eugene Yang, Dawn Lawrie, Benjamin Van Durme • 2025

Related benchmarks

Task | Dataset | Metric | Result | Rank
Text Embedding | MTEB | MTEB Score | 54.48 | 45
Text Embedding | MTEB Turkish (test) | Overall MTEB Score | 39.65 | 23
Retrieval | Legal | Legal Score | 41.71 | 10
Legal Retrieval | Turkish Legal | Legal Score | 41.71 | 9
Turkish Natural Language Understanding and Retrieval | TabiBench 1.0 (test) | Text Clf F1 | 82.54 | 5
