
Complexity-Based Prompting for Multi-Step Reasoning

About

We study the task of prompting large-scale language models to perform multi-step reasoning. Existing work shows that when prompted with a chain of thought (CoT), a sequence of short sentences describing intermediate reasoning steps towards a final answer, large language models can generate new reasoning chains and predict answers for new inputs. A central question is which reasoning examples make the most effective prompts. In this work, we propose complexity-based prompting, a simple and effective example selection scheme for multi-step reasoning. We show that prompts with higher reasoning complexity, i.e., chains with more reasoning steps, achieve substantially better performance on multi-step reasoning tasks than strong baselines. We further extend our complexity-based criterion from prompting (selecting inputs) to decoding (selecting outputs): we sample multiple reasoning chains from the model, then take the majority answer among the complex reasoning chains (rather than the simple ones). When used to prompt GPT-3 and Codex, our approach substantially improves multi-step reasoning accuracy and achieves new state-of-the-art (SOTA) performance on three math benchmarks (GSM8K, MultiArith, and MathQA) and two BigBenchHard tasks (Date Understanding and Penguins), with an average +5.3 and up to +18 accuracy improvement. Compared with existing example selection schemes such as manual tuning or retrieval-based selection, selection based on reasoning complexity is intuitive, easy to implement, and annotation-efficient. Further results demonstrate that the performance gains from complex prompts are robust under format perturbation and distribution shift.
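The two ideas in the abstract, selecting the most complex annotated examples as prompts and majority-voting only over the most complex sampled chains, can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function names and the choice of newline-separated lines as the step count are assumptions made here for clarity.

```python
from collections import Counter

def select_complex_prompts(examples, k):
    """Complexity-based prompting (illustrative sketch): pick the k annotated
    examples whose reasoning chains have the most steps. Here 'complexity' is
    approximated as the number of newline-separated lines in the chain."""
    return sorted(examples,
                  key=lambda ex: ex["chain"].count("\n") + 1,
                  reverse=True)[:k]

def complexity_based_vote(samples, top_k):
    """Complexity-based decoding (illustrative sketch): rank sampled chains by
    step count, keep the top_k most complex, and return the majority answer
    among them."""
    ranked = sorted(samples,
                    key=lambda s: s["chain"].count("\n") + 1,
                    reverse=True)
    answers = [s["answer"] for s in ranked[:top_k]]
    return Counter(answers).most_common(1)[0][0]
```

In use, `select_complex_prompts` would run once over the annotated pool to build the few-shot prompt, while `complexity_based_vote` would run per test question over the chains sampled from the model; `top_k` trades off between plain majority voting (all chains) and trusting only the deepest reasoning.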

Yao Fu, Hao Peng, Ashish Sabharwal, Peter Clark, Tushar Khot · 2022

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Mathematical Reasoning | GSM8K | Accuracy 86.89 | 983 |
| Code Generation | HumanEval | Pass@1 87.58 | 850 |
| Multi-task Language Understanding | MMLU | Accuracy 81.05 | 842 |
| Mathematical Reasoning | GSM8K (test) | Accuracy 81.4 | 751 |
| Mathematical Reasoning | MATH (test) | Overall Accuracy 50.3 | 433 |
| Mathematical Reasoning | MATH500 (test) | Accuracy 53.8 | 381 |
| Mathematical Reasoning | SVAMP | Accuracy 90.53 | 368 |
| Mathematical Reasoning | GSM8K | Accuracy 86.89 | 351 |
| Mathematical Reasoning | SVAMP (test) | Accuracy 86.2 | 233 |
| Arithmetic Reasoning | MultiArith | Accuracy 95.4 | 181 |

Showing 10 of 30 rows.
