
Achieving >97% on GSM8K: Deeply Understanding the Problems Makes LLMs Better Solvers for Math Word Problems

About

Chain-of-Thought (CoT) prompting has enhanced the performance of Large Language Models (LLMs) across various reasoning tasks. However, CoT still falls short on complex math word problems, as it usually suffers from three pitfalls: semantic misunderstanding errors, calculation errors, and step-missing errors. Prior studies address calculation errors and step-missing errors but neglect semantic misunderstanding errors, which are the major factor limiting the reasoning performance of LLMs. To this end, we propose a simple yet effective method, Deeply Understanding the Problems (DUP), to improve LLMs' math problem-solving ability by addressing semantic misunderstanding errors. The core of our method is to encourage the LLMs to deeply understand the problems and extract the key problem-solving information needed for better reasoning. Extensive experiments on 10 diverse reasoning benchmarks show that our DUP method consistently outperforms competitive counterparts by a large margin. More encouragingly, DUP achieves a new SOTA result on the GSM8K benchmark, with an accuracy of 97.1% under the zero-shot setting.
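The pipeline the abstract describes can be sketched as a staged prompting loop: first distill the core question, then extract the key problem-solving information, and finally reason with both. The prompt wording and the `llm` callable below are illustrative assumptions, not the authors' released prompts.

```python
from typing import Callable

def dup_solve(problem: str, llm: Callable[[str], str]) -> str:
    """Run a DUP-style three-stage pipeline with any text-in/text-out LLM.

    The stage prompts paraphrase the idea in the abstract (deeply understand
    the problem, extract key information, then reason); they are a sketch,
    not the paper's exact prompts.
    """
    # Stage 1: distill the core question the problem is actually asking.
    core_question = llm(
        f"{problem}\nPlease extract the core question, only the most "
        "comprehensive and detailed one."
    )
    # Stage 2: pull out the facts and numbers needed to answer it.
    key_info = llm(
        f"{problem}\nPlease extract the problem-solving information "
        f"relevant to this core question: {core_question}"
    )
    # Stage 3: answer using the distilled question and information.
    return llm(
        f"{problem}\nHint: {key_info}\nQuestion: {core_question}\n"
        "Using the hint and the question, solve the problem step by step "
        "and state the final answer."
    )
```

Any chat-completion client can be wrapped to fit the `llm: str -> str` interface, so the three stages stay decoupled from a specific provider.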

Qihuang Zhong, Kang Wang, Ziyang Xu, Juhua Liu, Liang Ding, Bo Du • 2024

Related benchmarks

Task                    Dataset                                              Metric         Result  Rank
Commonsense Reasoning   CSQA                                                 Accuracy       74.5    366
Arithmetic Reasoning    MultiArith                                           Accuracy       98.1    181
Arithmetic Reasoning    GSM8K                                                Accuracy       97.1    155
Commonsense Reasoning   StrategyQA                                           Accuracy       68.5    125
Arithmetic Reasoning    ADDSUB                                               Accuracy       95.1    76
Arithmetic Reasoning    SVAMP                                                Accuracy       94.2    48
Arithmetic Reasoning    SINGLEEQ                                             Accuracy       96.0    43
Arithmetic Reasoning    AQUA                                                 Accuracy       77.1    31
Symbolic Reasoning      LastLetter (test)                                    Accuracy       81.2    11
Arithmetic Reasoning    SVAMP, GSM8K, AddSub, MultiArith, AQUA, SingleEq     Average Score  92.9    10

(10 of 11 rows shown)
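The 92.9 average score in the last row can be cross-checked against the per-benchmark accuracies listed above:

```python
# Accuracies taken from the table above; the 92.9 "Average Score" row
# is their unweighted mean over the six arithmetic benchmarks.
accuracies = {
    "SVAMP": 94.2,
    "GSM8K": 97.1,
    "AddSub": 95.1,
    "MultiArith": 98.1,
    "AQUA": 77.1,
    "SingleEq": 96.0,
}
average = sum(accuracies.values()) / len(accuracies)
print(round(average, 1))  # → 92.9
```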
