
Scaling up Test-Time Compute with Latent Reasoning: A Recurrent Depth Approach

About

We study a novel language model architecture that is capable of scaling test-time computation by implicitly reasoning in latent space. Our model works by iterating a recurrent block, thereby unrolling to arbitrary depth at test-time. This stands in contrast to mainstream reasoning models that scale up compute by producing more tokens. Unlike approaches based on chain-of-thought, our approach does not require any specialized training data, can work with small context windows, and can capture types of reasoning that are not easily represented in words. We scale a proof-of-concept model to 3.5 billion parameters and 800 billion tokens. We show that the resulting model can improve its performance on reasoning benchmarks, sometimes dramatically, up to a computation load equivalent to 50 billion parameters.
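The core idea — a shared recurrent block that is unrolled a variable number of times at inference, so test-time compute scales without generating extra tokens — can be illustrated with a toy sketch. The `prelude`/`core`/`coda` structure mirrors the paper's description, but the functions below are hypothetical stand-ins (a real model would use transformer layers, and the paper's model has 3.5B parameters):

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8  # toy latent width (illustrative only)

# Hypothetical stand-in for the recurrent core's weights.
W_core = rng.normal(scale=0.1, size=(d, d))

def prelude(x):
    """Embed the input into latent space (toy: identity)."""
    return x

def core(s, e):
    """One iteration of the shared recurrent block: refine the
    latent state s, conditioned on the embedded input e."""
    return np.tanh(W_core @ s + e)

def coda(s):
    """Decode the latent state into an output (toy: identity)."""
    return s

def forward(x, num_iterations):
    """Unroll the same core block num_iterations times at test time.
    More iterations means more compute, with no extra parameters
    and no extra tokens in the context window."""
    e = prelude(x)
    s = np.zeros(d)  # initial latent state (toy choice)
    for _ in range(num_iterations):
        s = core(s, e)
    return coda(s)

x = rng.normal(size=(d,))
# The identical weights can be run at any depth at inference:
shallow = forward(x, num_iterations=4)
deep = forward(x, num_iterations=64)
```

In this toy setup the iteration is a contraction, so the latent state settles toward a fixed point as depth grows — one way to picture how extra iterations refine, rather than replace, the latent "reasoning" state.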

Jonas Geiping, Sean McLeish, Neel Jain, John Kirchenbauer, Siddharth Singh, Brian R. Bartoldson, Bhavya Kailkhura, Abhinav Bhatele, Tom Goldstein • 2025

Related benchmarks

Task                    Dataset                                                                    Result                      Rank
Mathematical Reasoning  SVAMP                                                                      Accuracy: 54.8              368
Mathematical Reasoning  GSM8K                                                                      EM: 32.6                    115
Language Modeling       FineWeb-Edu (test)                                                         Perplexity (test): 26.55    49
Mathematical Reasoning  GSM-Symbolic                                                               GSM-Sym Accuracy: 73.6      43
Commonsense Reasoning   CommonsenseQA (CSQA)                                                       Accuracy: 74.2              38
Language Modeling       The Pile (test)                                                            Perplexity (test): 11.6     27
Code Generation         MBPP                                                                       Accuracy: 56                25
Code Reasoning          MBPP                                                                       Accuracy: 31.5              23
Mathematical Reasoning  GSM8K                                                                      Accuracy: 78.4              19
Reasoning               Language Task Suite (COPA, HS, LB, OBQA, PIQA, Race, SciQ, ARC, SIQA, WG)  COPA: 66 (zero-shot)        17

(Showing 10 of 14 rows.)
