
T$^2$: An Adaptive Test-Time Scaling Strategy for Contextual Question Answering

About

Recent advances in Large Language Models (LLMs) have demonstrated remarkable performance in Contextual Question Answering (CQA). However, prior approaches typically employ elaborate reasoning strategies regardless of question complexity, leading to low adaptability. Recent efficient test-time scaling methods introduce budget constraints or early-stop mechanisms to avoid overthinking on straightforward questions, but they inject human bias into the reasoning process and fail to leverage models' inherent reasoning capabilities. To address these limitations, we present T$^2$: Think-to-Think, a novel framework that dynamically adapts reasoning depth based on question complexity. T$^2$ leverages the insight that if an LLM can effectively solve similar questions using a specific reasoning strategy, it can apply the same strategy to the original question. This insight enables the adoption of concise reasoning for straightforward questions while maintaining detailed analysis for complex problems. T$^2$ works through four key steps: decomposing questions into structural elements, generating similar examples with candidate reasoning strategies, evaluating these strategies against multiple criteria, and applying the most appropriate strategy to the original question. Experimental evaluation across seven diverse CQA benchmarks demonstrates that T$^2$ not only achieves higher accuracy than baseline methods but also reduces computational overhead by up to 25.2%.
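The four-step pipeline described above can be sketched in code. This is a minimal illustration under stated assumptions, not the authors' implementation: all function names, the clause-level decomposition heuristic, the variant-generation scheme, and the single scoring criterion are hypothetical stand-ins, and `ask` stands in for any LLM completion call.

```python
# Hedged sketch of the T^2 (Think-to-Think) control flow. Every helper here
# is an illustrative assumption; the paper's decomposition, example
# generation, and multi-criteria evaluation are richer than these toys.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Strategy:
    name: str           # e.g. "concise" or "step-by-step"
    prompt_prefix: str  # instruction prepended to the question


def decompose(question: str) -> List[str]:
    """Step 1: split the question into rough structural elements.
    (Toy heuristic: clause-level split on sentence boundaries.)"""
    return [p.strip() for p in question.replace("?", ".").split(".") if p.strip()]


def make_similar_examples(elements: List[str], n: int = 2) -> List[str]:
    """Step 2: build similar questions from the structural elements.
    (Toy stand-in: in T^2 these analogues are LLM-generated.)"""
    return [" ".join(elements) + f" (variant {i})" for i in range(n)]


def score_strategy(ask: Callable[[str], str], strategy: Strategy,
                   examples: List[str]) -> float:
    """Step 3: score a candidate strategy on the similar examples.
    (Toy criterion: fraction of non-empty answers; the paper evaluates
    strategies against multiple criteria.)"""
    answers = [ask(strategy.prompt_prefix + ex) for ex in examples]
    return sum(bool(a) for a in answers) / len(answers)


def t2_answer(ask: Callable[[str], str], question: str,
              strategies: List[Strategy]) -> str:
    """Step 4: apply the best-scoring strategy to the original question."""
    elements = decompose(question)
    examples = make_similar_examples(elements)
    best = max(strategies, key=lambda s: score_strategy(ask, s, examples))
    return ask(best.prompt_prefix + question)
```

In this sketch, a cheap "concise" strategy that already solves the analogues wins the `max` and is applied directly, which is how the adaptive depth selection avoids overthinking simple questions.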

Zhengyi Zhao, Shubo Zhang, Zezhong Wang, Huimin Wang, Yutian Zhao, Bin Liang, Yefeng Zheng, Binyang Li, Kam-Fai Wong, Xian Wu (2025)

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Mathematical Reasoning | GSM8K | Accuracy | 91.2 | 1362 |
| Mathematical Reasoning | MATH | Accuracy | 54.1 | 882 |
| Mathematical Reasoning | GSM8K | Accuracy | 84.9 | 499 |
| General Knowledge | MMLU | MMLU General Knowledge Accuracy | 84.9 | 234 |
| Logical Reasoning | LogiQA | LogiQA Accuracy | 75.6 | 181 |
| General Reasoning | MMLU | MMLU Accuracy | 86.7 | 156 |
| Math Reasoning | MATH | Accuracy | 52.6 | 121 |
| Logical Reasoning | LogiQA | Accuracy | 76.4 | 100 |
| General Reasoning | StratQA | Accuracy | 85.1 | 91 |
| Code Generation | MBPP | Accuracy | 70.5 | 90 |
(10 of 18 benchmark rows shown.)
