
Large Language Models as Analogical Reasoners

About

Chain-of-thought (CoT) prompting for language models demonstrates impressive performance across reasoning tasks, but typically needs labeled exemplars of the reasoning process. In this work, we introduce a new prompting approach, analogical prompting, designed to automatically guide the reasoning process of large language models. Inspired by analogical reasoning, a cognitive process in which humans draw from relevant past experiences to tackle new problems, our approach prompts language models to self-generate relevant exemplars or knowledge in the context, before proceeding to solve the given problem. This method presents several advantages: it obviates the need for labeling or retrieving exemplars, offering generality and convenience; it can also tailor the generated exemplars and knowledge to each problem, offering adaptability. Experimental results show that our approach outperforms 0-shot CoT and manual few-shot CoT in a variety of reasoning tasks, including math problem solving in GSM8K and MATH, code generation in Codeforces, and other reasoning tasks in BIG-Bench.

Michihiro Yasunaga, Xinyun Chen, Yujia Li, Panupong Pasupat, Jure Leskovec, Percy Liang, Ed H. Chi, Denny Zhou • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Mathematical Reasoning | GSM8K (test) | Accuracy | 90.7 | 900 |
| Reasoning | BBH | – | – | 672 |
| Mathematical Reasoning | GSM8K | Accuracy | 77.8 | 499 |
| Code Generation | MBPP (test) | – | – | 298 |
| Arithmetic Reasoning | GSM8K | Accuracy | 87.6 | 173 |
| Commonsense Question Answering | CSQA (test) | Accuracy | 0.708 | 127 |
| Commonsense Reasoning | CSQA | CSQA Accuracy | 81 | 126 |
| Arithmetic Reasoning | ADDSUB | Accuracy | 93.9 | 123 |
| Math Reasoning | AQUA | Accuracy | 86.6 | 78 |
| Long-context Reasoning | LongBench | Score | 53.4 | 62 |

Showing 10 of 15 rows
