
Budget-Aware Anytime Reasoning with LLM-Synthesized Preference Data

About

We study the reasoning behavior of large language models (LLMs) under limited computation budgets. In such settings, producing useful partial solutions quickly is often more practical than exhaustive reasoning, which incurs high inference costs. Many real-world tasks, such as trip planning, require models to deliver the best possible output within a fixed reasoning budget. We introduce an anytime reasoning framework and the Anytime Index, a metric that quantifies how effectively solution quality improves as reasoning tokens increase. To further enhance efficiency, we propose an inference-time self-improvement method using LLM-synthesized preference data, where models learn from their own reasoning comparisons to produce better intermediate solutions. Experiments on NaturalPlan (Trip), AIME, and GPQA datasets show consistent gains across Grok-3, GPT-oss, GPT-4.1/4o, and LLaMA models, improving both reasoning quality and efficiency under budget constraints.
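The abstract describes the Anytime Index only as a measure of how effectively solution quality improves as reasoning tokens increase. A natural way to formalize such a measure is the normalized area under the quality-vs-token-budget curve; the sketch below implements that reading. Note this is an assumption for illustration, not the authors' exact definition, and the function name `anytime_index` is hypothetical.

```python
def anytime_index(tokens, quality):
    """Normalized area under the quality-vs-tokens curve, in [0, 1].

    Assumed formulation, not the paper's exact definition:
    tokens  -- increasing token budgets at which partial solutions were scored
    quality -- solution quality in [0, 1] at each budget checkpoint
    """
    if len(tokens) != len(quality) or len(tokens) < 2:
        raise ValueError("need >= 2 (tokens, quality) checkpoints")
    area = 0.0
    for i in range(1, len(tokens)):
        # trapezoid rule on each budget interval
        area += (quality[i] + quality[i - 1]) / 2 * (tokens[i] - tokens[i - 1])
    return area / (tokens[-1] - tokens[0])  # normalize by total budget span

# Under this reading, a model that reaches high quality early in the budget
# scores higher than one that only improves near the end, even if both
# finish at the same final quality.
early = anytime_index([0, 100, 200, 400], [0.0, 0.8, 0.9, 0.9])
late = anytime_index([0, 100, 200, 400], [0.0, 0.0, 0.1, 0.9])
```

With these numbers, `early` evaluates to 0.7625 and `late` to 0.2625, rewarding the model that produces useful partial solutions quickly.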

Xuanming Zhang, Shwan Ashrafi, Aziza Mirsaidova, Amir Rezaeian, Miguel Ballesteros, Lydia B. Chilton, Zhou Yu, Dan Roth • 2026

Related benchmarks

Task | Dataset | Result | Rank
Macro-average Reasoning | Overall (NaturalPlan, AIME 2024, GPQA) | Final Score (Macro-Avg): 96.5 | 28
Math Reasoning | AIME 2024 | Final Accuracy: 100 | 28
Trip Planning | NaturalPlan Trip 2024 | Final CSR: 90.7 | 28
Scientific QA | GPQA Diamond | Final Accuracy: 98.9 | 28
