LoopRPT: Reinforcement Pre-Training for Looped Language Models

About

Looped language models (LoopLMs) perform iterative latent computation to refine internal representations, offering a promising alternative to explicit chain-of-thought (CoT) reasoning. However, existing reinforcement learning (RL) paradigms primarily target output tokens, creating a structural mismatch with looped architectures whose reasoning unfolds implicitly. In this work, we propose LoopRPT, a reinforcement pre-training framework tailored for LoopLMs. By reframing next-token prediction as a next-token reasoning task, LoopRPT assigns reinforcement signals directly to latent steps using an EMA teacher reference and noisy latent rollouts. This formulation enables RL to directly shape intermediate representations, compressing effective reasoning into fewer iterations. We instantiate LoopRPT on the Ouro architecture across multiple model scales. Results demonstrate that LoopRPT consistently improves per-step representation quality, achieving Pareto dominance in accuracy-computation trade-offs. Notably, significant gains on hard tokens indicate that LoopRPT enhances early-stage reasoning rather than merely encouraging premature exits. Our findings highlight reinforcement pre-training as a principled paradigm for learning efficient latent reasoning in LoopLMs.
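The core idea above — rewarding each latent loop iteration for how much it improves next-token prediction relative to a slowly-updated EMA teacher, under noisy latent rollouts — can be sketched in toy form. Everything below (the residual `loop_step`, the softmax readout, the reward definition, dimensions) is an illustrative assumption, not the paper's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (illustrative only).
D, V = 8, 16                                  # latent size, vocab size
W_loop = rng.normal(scale=0.1, size=(D, D))   # shared loop-step weights (student)
W_out = rng.normal(scale=0.1, size=(D, V))    # readout from latent to vocab logits
W_loop_ema = W_loop.copy()                    # frozen-per-rollout EMA "teacher" copy

def loop_step(h, W):
    """One latent refinement iteration (residual tanh update; a stand-in
    for a full looped-transformer block)."""
    return h + np.tanh(h @ W)

def token_logprob(h, token):
    """Log-probability of `token` under a softmax readout of latent h."""
    logits = h @ W_out
    logits = logits - logits.max()            # numerical stability
    return logits[token] - np.log(np.exp(logits).sum())

def latent_rollout(h0, token, n_steps, noise=0.05):
    """Noisy latent rollout: the per-step reinforcement signal is the
    improvement in next-token log-prob over the EMA teacher's own step."""
    h_student, h_teacher = h0.copy(), h0.copy()
    rewards = []
    for _ in range(n_steps):
        h_student = loop_step(h_student, W_loop) + rng.normal(scale=noise, size=D)
        h_teacher = loop_step(h_teacher, W_loop_ema)
        rewards.append(token_logprob(h_student, token)
                       - token_logprob(h_teacher, token))
    return rewards

def ema_update(decay=0.99):
    """Let the teacher slowly track the student, as in standard EMA schemes."""
    global W_loop_ema
    W_loop_ema = decay * W_loop_ema + (1 - decay) * W_loop

rewards = latent_rollout(rng.normal(size=D), token=3, n_steps=4)
print(len(rewards))  # one reinforcement signal per latent step -> 4
```

Because the reward is attached to each latent step rather than to output tokens, a policy-gradient update on these per-step signals would shape intermediate representations directly, which is what allows effective reasoning to be compressed into fewer loop iterations.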

Guo Tang, Shixin Jiang, Heng Chang, Nuo Chen, Yuhan Li, Huiming Fan, Jia Li, Ming Liu, Bing Qin • 2026

Related benchmarks

| Task                   | Dataset    | Result          | Rank |
|------------------------|------------|-----------------|------|
| Commonsense Reasoning  | HellaSwag  | Accuracy 80.03  | 1891 |
| Code Generation        | HumanEval  | --              | 1036 |
| Language Understanding | MMLU       | Accuracy 73.91  | 825  |
| Reasoning              | BBH        | Accuracy 78.24  | 672  |
| Code Generation        | HumanEval+ | --              | 383  |
| Commonsense Reasoning  | WinoGrande | Accuracy 76.47  | 372  |
| Language Understanding | MMLU-Pro   | Accuracy 54.19  | 87   |
| Code Generation        | MBPP       | Accuracy 77.24  | 74   |
| Question Answering     | ARC-C      | Accuracy 66.89  | 46   |
| Code Generation        | MBPP+      | Accuracy 65.08  | 29   |

(Showing 10 of 13 rows.)
