
CoT2-Meta: Budgeted Metacognitive Control for Test-Time Reasoning

About

Recent test-time reasoning methods improve performance by generating more candidate chains or searching over larger reasoning trees, but they typically lack explicit control over when to expand, what to prune, how to repair, and when to abstain. We introduce CoT2-Meta, a training-free metacognitive reasoning framework that combines object-level chain-of-thought generation with meta-level control over partial reasoning trajectories. The framework integrates four components: strategy-conditioned thought generation, tree-structured search, an online process oracle for step-level reasoning evaluation, and a meta-controller that allocates computation through expansion, pruning, repair, stopping, and fallback decisions. Under matched inference budgets, CoT2-Meta consistently outperforms strong single-path, sampling-based, and search-based baselines, including ReST-MCTS. On the default backbone, it achieves 92.8 EM on MATH, 90.4 accuracy on GPQA, 98.65 EM on GSM8K, 75.8 accuracy on BBEH, 85.6 accuracy on MMMU-Pro, and 48.8 accuracy on HLE, with gains over the strongest non-CoT2-Meta baseline of +3.6, +5.2, +1.15, +2.0, +4.3, and +4.3 points, respectively. Beyond these core results, the framework remains effective across a broader 15-benchmark suite spanning knowledge and QA, multi-hop reasoning, coding, and out-of-distribution evaluation. Additional analyses show better compute scaling, improved calibration, stronger selective prediction, targeted repair behavior, and consistent gains across backbone families. These results suggest that explicit metacognitive control is a practical design principle for reliable and compute-efficient test-time reasoning systems.
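The abstract describes a meta-level controller that decides, at each point in a reasoning tree, whether to expand, prune, repair, stop, or fall back, guided by a step-level process oracle and a fixed inference budget. The sketch below is an illustrative reconstruction of that control loop, not the authors' implementation: every name (Node, meta_control_loop, generate_step, oracle_score) and every threshold is an assumption, and the real system's prompts, scoring, and decision policy may differ substantially.

```python
# Hypothetical sketch of a budgeted metacognitive control loop over a reasoning
# tree. All interfaces and thresholds below are assumptions for illustration,
# not the CoT2-Meta implementation.

from dataclasses import dataclass, field
from typing import Callable, List, Optional


@dataclass
class Node:
    steps: List[str]                  # partial chain-of-thought so far
    score: float                      # step-level score from the process oracle
    parent: Optional["Node"] = None
    children: List["Node"] = field(default_factory=list)


def meta_control_loop(
    question: str,
    generate_step: Callable[[str, List[str], str], str],  # (question, steps, strategy) -> next step
    oracle_score: Callable[[str, List[str]], float],      # (question, steps) -> score in [0, 1]
    strategies: List[str],
    budget: int = 32,            # total generated steps allowed (assumed budget unit)
    prune_below: float = 0.3,    # assumed pruning threshold
    accept_above: float = 0.9,   # assumed early-stop threshold
) -> Optional[Node]:
    """Schematic meta-controller: expand, prune, repair, stop, or fall back."""
    root = Node(steps=[], score=1.0)
    frontier = [root]
    spent = 0

    while frontier and spent < budget:
        # Meta-level decision: act on the most promising open trajectory first.
        node = max(frontier, key=lambda n: n.score)

        if node.score >= accept_above and node.steps:
            return node                      # STOP: confident enough to answer
        if node.score < prune_below:
            frontier.remove(node)            # PRUNE: not worth further compute
            continue
        if node.steps and node.score < 0.5:
            # REPAIR: rewrite the weakest (here, last) step instead of branching.
            node.steps[-1] = generate_step(question, node.steps[:-1], "repair")
            node.score = oracle_score(question, node.steps)
            spent += 1
            continue

        # EXPAND: one strategy-conditioned child per reasoning strategy,
        # each scored by the process oracle.
        frontier.remove(node)
        for strategy in strategies:
            if spent >= budget:
                break
            step = generate_step(question, node.steps, strategy)
            child = Node(
                steps=node.steps + [step],
                score=oracle_score(question, node.steps + [step]),
                parent=node,
            )
            node.children.append(child)
            frontier.append(child)
            spent += 1

    # FALLBACK: budget exhausted without a confident trajectory; return the best
    # partial answer (a real system might abstain or fall back to a single path).
    survivors = [n for n in frontier if n.steps]
    return max(survivors, key=lambda n: n.score) if survivors else None
```

Under these assumptions, the training-free aspect shows up in the loop using only a generator and an oracle at inference time; the budget parameter is what would be matched against baselines in a compute-controlled comparison.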

Siyuan Ma, Bo Gao, Zikai Xiao, Hailong Wang, Xinlei Yu, Rui Qian, Jiayu Qian, Luqi Gong, Yang Liu • 2026

Related benchmarks

Task                       | Dataset                              | Metric          | Result | Rank
Multimodal Reasoning       | MMMU-Pro                             | Accuracy        | 85.6   | 107
Coding                     | HumanEval                            | Pass@1          | 72.8   | 103
Reasoning                  | ARC Challenge                        | Accuracy        | 86.3   | 93
Knowledge                  | GPQA                                 | Accuracy        | 90.4   | 51
Science Question Answering | GPQA                                 | Accuracy        | 91.5   | 46
Knowledge Evaluation       | Natural Questions (NQ) (Evaluation)  | Accuracy        | 45.4   | 45
Coding                     | MBPP                                 | Pass@1 Accuracy | 79.5   | 30
Multi-modal Reasoning      | MMMU-Pro                             | Accuracy        | 85.6   | 28
Multi-hop Reasoning        | HotpotQA                             | Accuracy        | 90.4   | 20
Mathematical Reasoning     | MATH                                 | Accuracy        | 92.8   | 12

Showing 10 of 19 rows.
