
Plan-and-Solve Prompting: Improving Zero-Shot Chain-of-Thought Reasoning by Large Language Models

About

Large language models (LLMs) have recently been shown to deliver impressive performance in various NLP tasks. To tackle multi-step reasoning tasks, few-shot chain-of-thought (CoT) prompting includes a few manually crafted step-by-step reasoning demonstrations which enable LLMs to explicitly generate reasoning steps and improve their reasoning task accuracy. To eliminate the manual effort, Zero-shot-CoT concatenates the target problem statement with "Let's think step by step" as an input prompt to LLMs. Despite the success of Zero-shot-CoT, it still suffers from three pitfalls: calculation errors, missing-step errors, and semantic misunderstanding errors. To address the missing-step errors, we propose Plan-and-Solve (PS) Prompting. It consists of two components: first, devising a plan to divide the entire task into smaller subtasks, and then carrying out the subtasks according to the plan. To address the calculation errors and improve the quality of generated reasoning steps, we extend PS prompting with more detailed instructions and derive PS+ prompting. We evaluate our proposed prompting strategy on ten datasets across three reasoning problems. The experimental results over GPT-3 show that our proposed zero-shot prompting consistently outperforms Zero-shot-CoT across all datasets by a large margin, is comparable to or exceeds Zero-shot-Program-of-Thought Prompting, and has comparable performance with 8-shot CoT prompting on the math reasoning problem. The code can be found at https://github.com/AGI-Edgerunners/Plan-and-Solve-Prompting.
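For concreteness, below is a minimal Python sketch of the two-stage zero-shot pipeline the abstract describes: a first pass in which the PS+ trigger sentence replaces "Let's think step by step", and a second pass that appends an answer-extraction cue to parse the final answer. This is not the authors' released code (see the repository linked above): the OpenAI client usage and the model name "gpt-4o-mini" are placeholder assumptions, and the exact trigger wording should be checked against the paper's appendix.

```python
# Minimal sketch of Plan-and-Solve (PS+) prompting.
# Assumptions: the OpenAI Python client (openai>=1.0) and a placeholder
# model name; the paper's experiments used GPT-3 (text-davinci-003).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# PS+ trigger: extract variables and devise a plan first, then carry out
# the plan with attention to correct numerical calculation.
PS_PLUS_TRIGGER = (
    "Let's first understand the problem, extract relevant variables and "
    "their corresponding numerals, and devise a plan. Then, let's carry "
    "out the plan, calculate intermediate variables (pay attention to "
    "correct numerical calculation and commonsense), solve the problem "
    "step by step, and show the answer."
)

def plan_and_solve(question: str, model: str = "gpt-4o-mini") -> str:
    """Two-stage zero-shot prompting: reasoning pass, then answer extraction."""
    # Stage 1: generate the plan and the step-by-step reasoning.
    reasoning_prompt = f"Q: {question}\nA: {PS_PLUS_TRIGGER}"
    reasoning = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": reasoning_prompt}],
        temperature=0,
    ).choices[0].message.content

    # Stage 2: append an extraction cue so the final answer is easy to parse.
    extract_prompt = f"{reasoning_prompt}\n{reasoning}\nTherefore, the answer is"
    return client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": extract_prompt}],
        temperature=0,
    ).choices[0].message.content

if __name__ == "__main__":
    print(plan_and_solve(
        "A robe takes 2 bolts of blue fiber and half that much white fiber. "
        "How many bolts in total does it take?"
    ))
```

Temperature 0 (greedy decoding) mirrors the deterministic setting typical of zero-shot CoT evaluations; the two-pass structure follows the standard Zero-shot-CoT recipe of separating reasoning generation from answer extraction.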

Lei Wang, Wanyu Xu, Yihuai Lan, Zhiqiang Hu, Yunshi Lan, Roy Ka-Wei Lee, Ee-Peng Lim • 2023

Related benchmarks

Task                          Dataset           Result             Rank
Commonsense Reasoning         CSQA              Accuracy: 68.8     366
Multi-hop Question Answering  2WikiMultihopQA   --                 278
Multi-hop Question Answering  HotpotQA (test)   --                 198
Arithmetic Reasoning          MultiArith        Accuracy: 98.1     181
Mathematical Reasoning        GSM-Hard          Solve Rate: 72.86  162
Arithmetic Reasoning          GSM8K             Accuracy: 94.3     155
Mathematical Reasoning        AQUA              Accuracy: 35       132
Commonsense Reasoning         StrategyQA        Accuracy: 77.1     125
Multi-hop Question Answering  MuSiQue           --                 106
Arithmetic Reasoning          ADDSUB            Accuracy: 93.1     76

Showing 10 of 69 rows.
