
Chain-of-Thought Prompting Elicits Reasoning in Large Language Models

About

We explore how generating a chain of thought -- a series of intermediate reasoning steps -- significantly improves the ability of large language models to perform complex reasoning. In particular, we show how such reasoning abilities emerge naturally in sufficiently large language models via a simple method called chain of thought prompting, where a few chain of thought demonstrations are provided as exemplars in prompting. Experiments on three large language models show that chain of thought prompting improves performance on a range of arithmetic, commonsense, and symbolic reasoning tasks. The empirical gains can be striking. For instance, prompting a 540B-parameter language model with just eight chain of thought exemplars achieves state of the art accuracy on the GSM8K benchmark of math word problems, surpassing even finetuned GPT-3 with a verifier.

Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Brian Ichter, Fei Xia, Ed Chi, Quoc Le, Denny Zhou • 2022
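To make the method concrete, here is a minimal sketch of chain-of-thought prompting as the abstract describes it: a few-shot prompt whose exemplars spell out intermediate reasoning steps before the final answer. The exemplar below is the well-known tennis-ball problem in the style of the paper's prompts; the helper function name is ours, and no particular model API is assumed.

```python
# One chain-of-thought exemplar: the answer includes the intermediate
# reasoning steps, not just the final number.
COT_EXEMPLAR = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
    "Each can has 3 tennis balls. How many tennis balls does he have now?\n"
    "A: Roger started with 5 balls. 2 cans of 3 tennis balls each is "
    "6 tennis balls. 5 + 6 = 11. The answer is 11.\n"
)

def build_cot_prompt(question: str) -> str:
    """Prepend the chain-of-thought exemplar(s) to a new question.

    The model is then expected to continue after "A:" with its own
    step-by-step reasoning, imitating the exemplar.
    """
    return COT_EXEMPLAR + "\nQ: " + question + "\nA:"

prompt = build_cot_prompt(
    "A jug holds 4 liters of water. How many liters do 3 jugs hold?"
)
print(prompt)
```

In practice the paper uses a small fixed set of such exemplars (eight for GSM8K); the resulting prompt is sent to the model unchanged, with no finetuning involved.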

Related benchmarks

Task                                 Dataset         Result            Rank
Commonsense Reasoning                HellaSwag       Accuracy: 55.89   1891
Mathematical Reasoning               GSM8K           Accuracy: 95.1    1362
Node Classification                  Cora            Accuracy: 63.02   1215
Commonsense Reasoning                WinoGrande      Accuracy: 63.6    1085
Code Generation                      HumanEval       Pass@1: 89.84     1036
Question Answering                   ARC Challenge   Accuracy: 81.06   906
Mathematical Reasoning               GSM8K (test)    Accuracy: 95.2    900
Mathematical Reasoning               MATH            Accuracy: 85.4    882
Multi-task Language Understanding    MMLU            Accuracy: 78.43   876
Language Understanding               MMLU            Accuracy: 83.01   825

Showing 10 of 1520 rows
