
When Do Symbolic Solvers Enhance Reasoning in Large Language Models?

About

Large Reasoning Models (LRMs) achieve strong performance on complex reasoning tasks by generating long Chains of Thought (CoTs). However, this paradigm can incur substantial token overhead, especially when models "overthink" by producing lengthy reasoning chains, which can even lead to incorrect answers. A promising alternative is the symbolic-solver-integrated approach, which leverages the code-generation capabilities of LLMs to translate a reasoning task into executable code and then solve it with a symbolic solver. In this paper, we explore the open question of when conventional long-CoT reasoning can be enhanced by symbolic solvers. Our experimental results show that the symbolic-solver-integrated method helps only when the problem requires limited implicit reasoning but involves an ample search space. The latest LLMs, such as GPT-4o, perform better on deductive problems with shallow reasoning depth, while the symbolic-solver-integrated method significantly improves LLM performance on constraint satisfaction problems that require repeated backtracking. When a declarative exemplar is provided, even CodeLlama-13B can outperform GPT-4o on difficult Zebra puzzles.
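The symbolic-solver-integrated approach described above can be illustrated with a toy Zebra-style puzzle: instead of reasoning step by step in natural language, the problem is translated into a declarative encoding whose constraints are checked mechanically. This is a minimal sketch, assuming a 3-house puzzle with invented clues; exhaustive search over permutations stands in for a real symbolic solver, which would use constraint propagation and backtracking.

```python
from itertools import permutations

COLORS = ("red", "green", "blue")
PETS = ("dog", "cat", "fish")

def solve():
    """Find all assignments of colors and pets to houses 0..2 satisfying the clues."""
    solutions = []
    for colors in permutations(COLORS):
        for pets in permutations(PETS):
            color_pos = {name: i for i, name in enumerate(colors)}
            pet_pos = {name: i for i, name in enumerate(pets)}
            # Clue 1: the red house is immediately left of the green house.
            if color_pos["red"] + 1 != color_pos["green"]:
                continue
            # Clue 2: the dog lives in the blue house.
            if pet_pos["dog"] != color_pos["blue"]:
                continue
            # Clue 3: the cat lives in the middle house.
            if pet_pos["cat"] != 1:
                continue
            # Clue 4: the fish lives in the green house.
            if pet_pos["fish"] != color_pos["green"]:
                continue
            solutions.append(list(zip(colors, pets)))
    return solutions

print(solve())  # unique solution: blue/dog, red/cat, green/fish
```

The point of the declarative encoding is that once the clues are expressed as constraints, finding the answer requires no further "reasoning" from the model; real Zebra puzzles (5 houses, 5 attribute categories) have far larger search spaces, which is where a backtracking solver pays off over long CoT generation.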

Zhiyuan He, Dingmin Wang • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Mathematical Reasoning | GSM-Hard | Solve Rate | 78 | 162 |
| Arithmetic Reasoning | GSM8K | Accuracy | 94.6 | 155 |
| Constraint Satisfaction Reasoning | ZebraLogic | Easy Score | 96.8 | 9 |
| Arithmetic Reasoning | GSM Reversed | Accuracy | 90.3 | 7 |
| Entailment Reasoning | EntailmentBank | Accuracy | 81.8 | 2 |
