Olmo Hybrid: From Theory to Practice and Back
About
Recent work has demonstrated the potential of non-transformer language models, especially linear recurrent neural networks (RNNs) and hybrid models that mix recurrence and attention. Yet there is no consensus on whether the potential benefits of these new architectures justify the risk and effort of scaling them up. To address this, we provide evidence for the advantages of hybrid models over pure transformers on several fronts. First, theoretically, we show that hybrid models do not merely inherit the expressivity of transformers and linear RNNs, but can express tasks beyond both, such as code execution. Putting this theory into practice, we train Olmo Hybrid, a 7B-parameter model largely comparable to Olmo 3 7B but with the sliding window layers replaced by Gated DeltaNet layers. We show that Olmo Hybrid outperforms Olmo 3 across standard pretraining and mid-training evaluations, demonstrating the benefit of hybrid models in a controlled, large-scale setting. We find that the hybrid model scales significantly more efficiently than the transformer, explaining its higher performance. However, it is unclear why greater expressivity on specific formal problems should result in better scaling or superior performance on downstream tasks unrelated to those problems. To explain this apparent gap, we return to theory and argue why increased expressivity should translate to better scaling efficiency, completing the loop. Overall, our results suggest that hybrid models mixing attention and recurrent layers are a powerful extension to the language modeling paradigm: not merely a way to reduce memory during inference, but a fundamental way to obtain more expressive models that scale better during pretraining.
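To make the recurrent half of the hybrid concrete, the following is a minimal NumPy sketch of one recurrence step in the spirit of Gated DeltaNet's gated delta rule: the state `S` is decayed by a forget gate `alpha`, the old association stored under key `k` is erased with strength `beta`, the new key-value pair is written, and the output is read out with the query. The function name, the scalar gates, and the single-head layout are illustrative assumptions, not the model's actual parameterization (which operates per-head with learned, per-token gates).

```python
import numpy as np

def gated_delta_step(S, q, k, v, alpha, beta):
    """One sketch step of a gated delta rule recurrence (illustrative, not Olmo Hybrid's exact form).

    S:     (d_v, d_k) recurrent state matrix (the "memory")
    q, k:  (d_k,) query and key vectors (k assumed L2-normalized)
    v:     (d_v,) value vector
    alpha: scalar forget gate in [0, 1]
    beta:  scalar write strength in [0, 1]
    """
    # Decay the state and erase the value currently associated with k,
    # then write the new association v k^T.
    S = alpha * (S - beta * np.outer(S @ k, k)) + beta * np.outer(v, k)
    # Read out against the query, analogous to attention's weighted sum.
    o = S @ q
    return S, o

# Tiny demo: starting from an empty state, writing (k, v) and querying with q = k
# reads back v scaled by the key-query overlap.
S = np.zeros((3, 2))
k = np.array([1.0, 0.0])
v = np.array([1.0, 2.0, 3.0])
S, o = gated_delta_step(S, q=k, k=k, v=v, alpha=1.0, beta=1.0)
```

Unlike softmax attention, whose memory grows linearly with sequence length, this state is a fixed-size matrix updated in place, which is what makes interleaving such layers with full attention attractive at scale.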
Related benchmarks
| Task | Dataset | Metric | Score | Rank |
|---|---|---|---|---|
| Generative Question Answering | Bolmo Evaluation Suite GenQA 7B | GenQA Average | 72.9 | 39 |
| Mathematical Reasoning | OlmoBaseEval Math (GSM8k, GSM Symbolic, MATH) | Math Aggregate Score | 55.1 | 34 |
| Code Generation | OlmoBaseEval Code (BigCodeBench, HumanEval, DeepSeek LeetCode, DS 1000, MBPP, MultiPL) | OlmoBaseEval Code Score | 32.4 | 34 |
| Long-Context Retrieval | RULER | Retrieval Accuracy (8K) | 91.4 | 34 |
| Multiple Choice Non-STEM Question Answering | OlmoBaseEval MC Non-STEM (MMLU Humanities/Social Sci, CSQA, PiQA, SocialIQA, CoQA, DROP, Jeopardy, NaturalQs, SQuAD) | Aggregate Score | 80.4 | 34 |
| Multiple Choice STEM Question Answering | OlmoBaseEval MCSTEM | MCSTEM Score | 70.0 | 22 |
| General Language Model Evaluation | OlmoBaseEval HeldOut (LBPP, BBH, MMLU Pro, etc.) | LBPP Score | 16.8 | 10 |