
Olmo Hybrid: From Theory to Practice and Back

About

Recent work has demonstrated the potential of non-transformer language models, especially linear recurrent neural networks (RNNs) and hybrid models that mix recurrence and attention. Yet there is no consensus on whether the potential benefits of these new architectures justify the risk and effort of scaling them up. To address this, we provide evidence for the advantages of hybrid models over pure transformers on several fronts. First, theoretically, we show that hybrid models do not merely inherit the expressivity of transformers and linear RNNs, but can express tasks beyond both, such as code execution. Putting this theory to practice, we train Olmo Hybrid, a 7B-parameter model largely comparable to Olmo 3 7B but with the sliding window layers replaced by Gated DeltaNet layers. We show that Olmo Hybrid outperforms Olmo 3 across standard pretraining and mid-training evaluations, demonstrating the benefit of hybrid models in a controlled, large-scale setting. We find that the hybrid model scales significantly more efficiently than the transformer, explaining its higher performance. However, it's unclear why greater expressivity on specific formal problems should result in better scaling or superior performance on downstream tasks unrelated to those problems. To explain this apparent gap, we return to theory and argue why increased expressivity should translate to better scaling efficiency, completing the loop. Overall, our results suggest that hybrid models mixing attention and recurrent layers are a powerful extension to the language modeling paradigm: not merely to reduce memory during inference, but as a fundamental way to obtain more expressive models that scale better during pretraining.
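The abstract describes replacing attention layers with Gated DeltaNet layers, whose core is a linear recurrence over a matrix-valued state. As an illustrative sketch only (not the authors' implementation, and omitting per-token gate parameterization, multi-head structure, and chunked/parallel training kernels), a single step of the gated delta rule can be written as: the state is decayed by a forget gate alpha and updated with a rank-one "delta" correction toward the new key–value pair.

```python
import numpy as np

def gated_delta_step(S, q, k, v, alpha, beta):
    """One step of the gated delta rule (illustrative sketch).

    S: (d_v, d_k) recurrent state matrix; q, k: (d_k,); v: (d_v,).
    alpha in (0, 1): scalar forget gate; beta in (0, 1): write strength.
    State update: S <- alpha * S @ (I - beta * k k^T) + beta * v k^T
    Output readout: o = S @ q
    (Names and scalar gating here are simplifying assumptions.)
    """
    d_k = k.shape[0]
    S = alpha * S @ (np.eye(d_k) - beta * np.outer(k, k)) + beta * np.outer(v, k)
    o = S @ q
    return S, o

# Toy rollout over a short sequence
rng = np.random.default_rng(0)
d_k, d_v, T = 4, 4, 8
S = np.zeros((d_v, d_k))
for _ in range(T):
    k = rng.standard_normal(d_k); k /= np.linalg.norm(k)  # unit-norm key
    q = rng.standard_normal(d_k)
    v = rng.standard_normal(d_v)
    S, o = gated_delta_step(S, q, k, v, alpha=0.9, beta=0.5)
print(o.shape)  # (4,)
```

Unlike softmax attention, the per-token state here is a fixed-size matrix, which is why such layers reduce inference memory; the paper's claim is that interleaving them with attention also increases expressivity.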

William Merrill, Yanhong Li, Tyler Romero, Anej Svete, Caia Costello, Pradeep Dasigi, Dirk Groeneveld, David Heineman, Bailey Kuehl, Nathan Lambert, Chuan Li, Kyle Lo, Saumya Malik, DJ Matusz, Benjamin Minixhofer, Jacob Morrison, Luca Soldaini, Finbarr Timbers, Pete Walsh, Noah A. Smith, Hannaneh Hajishirzi, Ashish Sabharwal• 2026

Related benchmarks

Task | Dataset | Result | Rank
Generative Question Answering | Bolmo Evaluation Suite GenQA 7B | GenQA Average: 72.9 | 39
Mathematical Reasoning | OlmoBaseEval Math (GSM8k, GSM Symbolic, MATH) | Math Aggregate Score: 55.1 | 34
Code Generation | OlmoBaseEval Code (BigCodeBench, HumanEval, DeepSeek LeetCode, DS 1000, MBPP, MultiPL) | OlmoBaseEval Code Score: 32.4 | 34
Long-context Retrieval | RULER | Retrieval Accuracy (8K): 91.4 | 34
Multiple Choice Non-STEM Question Answering | OlmoBaseEval MC Non-STEM (MMLU Humanities/Social Sci, CSQA, PiQA, SocialIQA, CoQA, DROP, Jeopardy, NaturalQs, SQuAD) | Aggregate Score: 80.4 | 34
Multiple Choice STEM Question Answering | OlmoBaseEval MCSTEM | MCSTEM Score: 70 | 22
General Language Model Evaluation | OlmoBaseEval HeldOut (LBPP, BBH, MMLU Pro, etc.) | LBPP Score: 16.8 | 10
