
Least-to-Most Prompting Enables Complex Reasoning in Large Language Models

About

Chain-of-thought prompting has demonstrated remarkable performance on various natural language reasoning tasks. However, it tends to perform poorly on tasks that require solving problems harder than the exemplars shown in the prompts. To overcome this challenge of easy-to-hard generalization, we propose a novel prompting strategy, least-to-most prompting. The key idea in this strategy is to break down a complex problem into a series of simpler subproblems and then solve them in sequence. Solving each subproblem is facilitated by the answers to previously solved subproblems. Our experimental results on tasks related to symbolic manipulation, compositional generalization, and math reasoning reveal that least-to-most prompting is capable of generalizing to more difficult problems than those seen in the prompts. A notable finding is that when the GPT-3 code-davinci-002 model is used with least-to-most prompting, it can solve the compositional generalization benchmark SCAN in any split (including length split) with an accuracy of at least 99% using just 14 exemplars, compared to only 16% accuracy with chain-of-thought prompting. This is particularly noteworthy because neural-symbolic models in the literature that specialize in solving SCAN are trained on the entire training set containing over 15,000 examples. We have included prompts for all the tasks in the Appendix.
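The sequential solve-and-accumulate step described above can be sketched in a few lines of Python. This is a minimal illustration, not the paper's implementation: the `llm` function is a placeholder standing in for a call to a real model (e.g., code-davinci-002), and the prompt format is an assumption for demonstration purposes.

```python
def llm(prompt: str) -> str:
    """Placeholder for a language-model call (e.g., an API request).

    Here it simply echoes the last line of the prompt so the control
    flow below can run standalone; a real system would return the
    model's completion instead.
    """
    return prompt.strip().splitlines()[-1]


def least_to_most(question: str, subproblems: list[str]) -> list[str]:
    """Solve subproblems in order, least to most complex.

    Each subproblem's answer is appended to the running context, so
    later subproblems are solved with the help of earlier answers.
    """
    context = f"Q: {question}\n"
    answers = []
    for sub in subproblems:
        prompt = context + f"Subquestion: {sub}\nAnswer:"
        ans = llm(prompt)
        answers.append(ans)
        # Feed the solved subproblem forward into the next prompt.
        context += f"Subquestion: {sub}\nAnswer: {ans}\n"
    return answers
```

In the paper's setup, the decomposition itself (producing the `subproblems` list) is also obtained by prompting the model with decomposition exemplars; here that list is supplied by the caller for brevity.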

Denny Zhou, Nathanael Schärli, Le Hou, Jason Wei, Nathan Scales, Xuezhi Wang, Dale Schuurmans, Claire Cui, Olivier Bousquet, Quoc Le, Ed Chi · 2022

Related benchmarks

Task                           Dataset            Result           Rank
Commonsense Reasoning          HellaSwag          Accuracy 37.8    1460
Mathematical Reasoning         GSM8K (test)       Accuracy 68.01   797
Commonsense Reasoning          PIQA               Accuracy 62.8    647
Commonsense Reasoning          CSQA               Accuracy 71.9    366
Multi-hop Question Answering   2WikiMultihopQA    EM 31.3          278
Commonsense Reasoning          WinoGrande         Accuracy 59.7    231
Multi-hop Question Answering   HotpotQA           --               221
Multi-hop Question Answering   HotpotQA (test)    --               198
Arithmetic Reasoning           MultiArith         Accuracy 97.1    181
Arithmetic Reasoning           GSM8K              Accuracy 92.1    155

Showing 10 of 92 rows
