
OLMoE: Open Mixture-of-Experts Language Models

About

We introduce OLMoE, a fully open, state-of-the-art language model leveraging sparse Mixture-of-Experts (MoE). OLMoE-1B-7B has 7 billion (B) parameters but activates only 1B per input token. We pretrain it on 5 trillion tokens and further adapt it to create OLMoE-1B-7B-Instruct. Our models outperform all available models with similar active parameters, even surpassing larger ones such as Llama2-13B-Chat and DeepSeekMoE-16B. We present various experiments on MoE training, analyze routing in our model to show high specialization, and open-source all aspects of our work: model weights, training data, code, and logs.
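To make the "7B total, 1B active" idea concrete, below is a minimal sketch (in PyTorch) of a sparse MoE feed-forward layer with top-k routing: the router scores every expert for each token, but only the k highest-scoring experts are actually executed, so most parameters stay idle for any given token. The layer sizes, expert count, and top-k here are illustrative assumptions for this sketch, not OLMoE's exact configuration; see the released code and paper for the real architecture.

```python
# Minimal sketch of a sparse Mixture-of-Experts feed-forward layer with
# top-k routing. All dimensions below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class MoELayer(nn.Module):
    def __init__(self, d_model=1024, d_hidden=2048, num_experts=64, top_k=8):
        super().__init__()
        self.top_k = top_k
        # Router: one logit per expert for every token.
        self.router = nn.Linear(d_model, num_experts, bias=False)
        # Each expert is a small feed-forward network.
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.SiLU(), nn.Linear(d_hidden, d_model))
            for _ in range(num_experts)
        )

    def forward(self, x):  # x: (num_tokens, d_model)
        logits = self.router(x)                               # (tokens, experts)
        weights = F.softmax(logits, dim=-1)
        topk_w, topk_idx = weights.topk(self.top_k, dim=-1)   # keep k experts per token
        topk_w = topk_w / topk_w.sum(dim=-1, keepdim=True)    # renormalize the kept gates
        out = torch.zeros_like(x)
        for e, expert in enumerate(self.experts):
            mask = (topk_idx == e)                            # which tokens routed to expert e
            token_ids, slot = mask.nonzero(as_tuple=True)
            if token_ids.numel() == 0:
                continue                                      # expert unused for this batch
            out[token_ids] += topk_w[token_ids, slot].unsqueeze(-1) * expert(x[token_ids])
        return out


tokens = torch.randn(4, 1024)
layer = MoELayer()
print(layer(tokens).shape)  # torch.Size([4, 1024]); only 8 of 64 experts ran per token
```

Because each token touches only top_k of num_experts expert networks, the compute and active parameter count per token scale with k rather than with the full expert count, which is how an MoE model can hold many more parameters than it spends per token.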

Niklas Muennighoff, Luca Soldaini, Dirk Groeneveld, Kyle Lo, Jacob Morrison, Sewon Min, Weijia Shi, Pete Walsh, Oyvind Tafjord, Nathan Lambert, Yuling Gu, Shane Arora, Akshita Bhagia, Dustin Schwenk, David Wadden, Alexander Wettig, Binyuan Hui, Tim Dettmers, Douwe Kiela, Ali Farhadi, Noah A. Smith, Pang Wei Koh, Amanpreet Singh, Hannaneh Hajishirzi • 2024

Related benchmarks

Task | Dataset | Metric | Result | Rank
Multi-task Language Understanding | MMLU | Accuracy | 50.5 | 842
Commonsense Reasoning | WinoGrande | Accuracy | 68.4 | 776
Question Answering | ARC Challenge | Accuracy | 55.2 | 749
Commonsense Reasoning | PIQA | Accuracy | 80.6 | 647
Question Answering | OpenBookQA | Accuracy | 44.4 | 465
Code Generation | HumanEval (test) | -- | -- | 444
Question Answering | ARC Easy | Accuracy | 76.89 | 386
Question Answering | ARC Easy | Normalized Accuracy | 78 | 385
Natural Language Inference | RTE | Accuracy | 71.84 | 367
Physical Interaction Question Answering | PIQA | Accuracy | 80.1 | 323
Showing 10 of 35 rows
