
Test-time Recursive Thinking: Self-Improvement without External Feedback

About

Modern Large Language Models (LLMs) have shown rapid improvements in reasoning capabilities, driven largely by reinforcement learning (RL) with verifiable rewards. Here, we ask whether these LLMs can self-improve without the need for additional training. We identify two core challenges for such systems: (i) efficiently generating diverse, high-quality candidate solutions, and (ii) reliably selecting correct answers in the absence of ground-truth supervision. To address these challenges, we propose Test-time Recursive Thinking (TRT), an iterative self-improvement framework that conditions generation on rollout-specific strategies, accumulated knowledge, and self-generated verification signals. Using TRT, open-source models reach 100% accuracy on AIME-25/24, and on LiveCodeBench's most difficult problems, closed-source models improve by 10.4-14.8 percentage points without external feedback.

Yufan Zhuang, Chandan Singh, Liyuan Liu, Yelong Shen, Dinghuai Zhang, Jingbo Shang, Jianfeng Gao, Weizhu Chen • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Visual Grounded Reasoning | TreeBench | Overall Score | 48.9 | 128 |
| Visual Perception and Reasoning | V*Bench | Attribute Score | 92.2 | 41 |
| High-Resolution Multimodal Reasoning | HR-Bench-4K | Overall Score | 86.2 | 40 |
| High-Resolution Multimodal Reasoning | HR-Bench-8K | Overall Score | 83.9 | 40 |
| Perception | MME-RealWorld-Lite | Overall Score | 56.8 | 29 |
| Reasoning | MME-RealWorld-Lite | OCR Score | 81 | 20 |
| Visual Question Answering | VisualProbe Medium | Accuracy | 39.6 | 9 |
| Visual Question Answering | VisualProbe Hard | Accuracy | 40.6 | 9 |
| Visual Question Answering | VisualProbe (Overall) | Accuracy | 45.3 | 9 |
| Visual Question Answering | VisualProbe Easy | Accuracy | 59.7 | 9 |
