
Route to Reason: Adaptive Routing for LLM and Reasoning Strategy Selection

About

The inherent capabilities of a language model (LM) and the reasoning strategies it employs jointly determine its performance on reasoning tasks. While test-time scaling is regarded as an effective approach to tackling complex reasoning tasks, it incurs substantial computational costs and often leads to "overthinking", where models become trapped in "thought pitfalls". To address this challenge, we propose Route-To-Reason (RTR), a novel unified routing framework that dynamically allocates both LMs and reasoning strategies according to task difficulty under budget constraints. RTR learns compressed representations of both expert models and reasoning strategies, enabling their joint and adaptive selection at inference time. This method is low-cost, highly flexible, and can be seamlessly extended to arbitrary black-box or white-box models and strategies, achieving true plug-and-play functionality. Extensive experiments across seven open-source models and four reasoning strategies demonstrate that RTR achieves an optimal trade-off between accuracy and computational efficiency among all baselines, attaining higher accuracy than the best single model while reducing token usage by over 60%.
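The core idea of joint routing can be sketched in a few lines: for each query, score every (model, strategy) pair by predicted accuracy minus a token-cost penalty, and pick the best pair that fits the budget. The sketch below is illustrative only — the `Expert` tables, `route` function, cost numbers, and the penalty term are all hypothetical stand-ins; the paper's actual method learns compressed representations of models and strategies to make these predictions, which a lookup table cannot capture.

```python
from dataclasses import dataclass
from itertools import product

@dataclass
class Expert:
    name: str
    acc: dict   # hypothetical predicted accuracy per strategy
    cost: dict  # hypothetical predicted token cost per strategy

def route(experts, strategies, budget, penalty=0.001):
    """Pick the (model, strategy) pair maximizing predicted accuracy
    minus a token-cost penalty, subject to a token budget."""
    best, best_score = None, float("-inf")
    for e, s in product(experts, strategies):
        if e.cost[s] > budget:
            continue  # pair violates the budget constraint
        score = e.acc[s] - penalty * e.cost[s]
        if score > best_score:
            best, best_score = (e.name, s), score
    return best

# Toy 2-model x 2-strategy example; all numbers are made up.
experts = [
    Expert("small-lm", {"direct": 0.60, "cot": 0.70}, {"direct": 50, "cot": 400}),
    Expert("large-lm", {"direct": 0.75, "cot": 0.90}, {"direct": 80, "cot": 1200}),
]
print(route(experts, ["direct", "cot"], budget=1000))  # -> ('large-lm', 'direct')
```

With a tight budget the expensive chain-of-thought call on the large model is excluded, so the router falls back to the cheaper direct call; loosening the budget (or lowering the penalty) lets the higher-accuracy but costlier pair win, which is the trade-off the framework navigates.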

Zhihong Pan, Kai Zhang, Yuze Zhao, Yupeng Han • 2025

Related benchmarks

Task                         | Dataset           | Metric   | Result | Rank
Mathematical Reasoning       | Game of 24        | Accuracy | 80     | 103
Question Answering           | WikiQA            | Accuracy | 26     | 29
Question Answering           | TATQA             | F1       | 7.96   | 27
Multi-hop Question Answering | MoreHopQA         | Accuracy | 73     | 25
Continual routing            | 2WikiMultiHop     | Accuracy | 59.4   | 22
Continual routing            | GSM8K             | Accuracy | 91.6   | 22
Continual routing            | Average           | Accuracy | 74.7   | 22
Continual routing            | MMLU              | Accuracy | 73.7   | 22
Multi-hop Question Answering | HotpotQA          | Accuracy | 79     | 15
Explorative Reasoning        | Game of 24 (test) | Accuracy | 80     | 11
Showing 10 of 34 rows
