Scaling up Test-Time Compute with Latent Reasoning: A Recurrent Depth Approach
About
We study a novel language model architecture that is capable of scaling test-time computation by implicitly reasoning in latent space. Our model works by iterating a recurrent block, thereby unrolling to arbitrary depth at test-time. This stands in contrast to mainstream reasoning models that scale up compute by producing more tokens. Unlike approaches based on chain-of-thought, our approach does not require any specialized training data, can work with small context windows, and can capture types of reasoning that are not easily represented in words. We scale a proof-of-concept model to 3.5 billion parameters and 800 billion tokens. We show that the resulting model can improve its performance on reasoning benchmarks, sometimes dramatically, up to a computation load equivalent to 50 billion parameters.
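The core idea above — a shared recurrent block unrolled a variable number of times in latent space at test time — can be illustrated with a toy numerical sketch. This is an assumption-laden illustration, not the paper's implementation: the prelude/block/coda names and the tiny tanh layers are hypothetical stand-ins for the model's actual transformer components.

```python
import numpy as np

# Toy sketch of recurrent-depth latent reasoning (illustrative only):
# a "prelude" embeds the input into latent space, one shared recurrent
# block is iterated r times, and a "coda" maps the final state out.
# More iterations = more test-time compute with the SAME parameter count.

rng = np.random.default_rng(0)
d = 8  # latent width (arbitrary for the sketch)

W_in = rng.normal(scale=0.3, size=(d, d))    # prelude (input embedding)
W_rec = rng.normal(scale=0.3, size=(d, d))   # shared recurrent block
W_out = rng.normal(scale=0.3, size=(d, d))   # coda (readout)

def forward(x, r):
    """Unroll the shared block r times at test time."""
    e = np.tanh(W_in @ x)        # embed the input once
    s = e                        # initial latent state
    for _ in range(r):           # same weights reused at every depth step
        s = np.tanh(W_rec @ s + e)  # re-inject the embedded input each step
    return W_out @ s

x = rng.normal(size=d)
shallow = forward(x, r=1)   # little test-time compute
deep = forward(x, r=32)     # 32x the compute, zero extra parameters
print(shallow.shape, deep.shape)
```

The parameter count is fixed by the three weight matrices; only the loop count `r` changes, which is the sense in which compute scales at test time without producing extra tokens.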
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Mathematical Reasoning | SVAMP | Accuracy | 54.8 | 368 |
| Mathematical Reasoning | GSM8K | EM | 32.6 | 115 |
| Language Modeling | FineWeb-Edu (test) | Perplexity (Test) | 26.55 | 49 |
| Mathematical Reasoning | GSM-Symbolic | GSM-Sym Accuracy | 73.6 | 43 |
| Commonsense Reasoning | CommonsenseQA (CSQA) | Accuracy | 74.2 | 38 |
| Language Modeling | The Pile (test) | PPL (The Pile Test) | 11.6 | 27 |
| Code Generation | MBPP | Accuracy | 56 | 25 |
| Code Reasoning | MBPP | Accuracy | 31.5 | 23 |
| Mathematical Reasoning | GSM8K | Accuracy | 78.4 | 19 |
| Reasoning | Language Task Suite (COPA, HS, LB, OBQA, PIQA, Race, SciQ, ARC, SIQA, WG) zero-shot standard | COPA | 66 | 17 |