
Large Language Models Cannot Self-Correct Reasoning Yet

About

Large Language Models (LLMs) have emerged as a groundbreaking technology with their unparalleled text generation capabilities across various applications. Nevertheless, concerns persist regarding the accuracy and appropriateness of their generated content. A contemporary methodology, self-correction, has been proposed as a remedy to these issues. Building upon this premise, this paper critically examines the role and efficacy of self-correction within LLMs, shedding light on its true potential and limitations. Central to our investigation is the notion of intrinsic self-correction, whereby an LLM attempts to correct its initial responses based solely on its inherent capabilities, without the crutch of external feedback. In the context of reasoning, our research indicates that LLMs struggle to self-correct their responses without external feedback, and at times, their performance even degrades after self-correction. Drawing from these insights, we offer suggestions for future research and practical applications in this field.
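The intrinsic self-correction setup the paper studies can be sketched as a simple multi-turn prompting loop: obtain an initial answer, ask the model to critique that answer using only its own capabilities, then ask it to revise. The sketch below is illustrative, not the paper's exact protocol; `call_model` is a hypothetical stand-in for any chat-completion API, and the prompt wording is an assumption.

```python
def call_model(messages):
    """Hypothetical stand-in for an LLM chat-completion call.

    A real implementation would send `messages` to an LLM API and
    return the assistant's reply; here it returns a placeholder.
    """
    return "(model response)"


def intrinsic_self_correct(question, rounds=1):
    """Run an intrinsic self-correction loop: answer, self-critique, revise.

    No external feedback (ground truth, tools, verifiers) is used --
    the model reviews and revises its own output.
    """
    messages = [{"role": "user", "content": question}]
    answer = call_model(messages)
    messages.append({"role": "assistant", "content": answer})

    for _ in range(rounds):
        # Self-critique: the model reviews its own previous answer.
        messages.append({"role": "user", "content":
                         "Review your previous answer and find problems with it."})
        critique = call_model(messages)
        messages.append({"role": "assistant", "content": critique})

        # Revision: the model updates its answer based on its own critique.
        messages.append({"role": "user", "content":
                         "Based on the problems you found, improve your answer."})
        answer = call_model(messages)
        messages.append({"role": "assistant", "content": answer})

    return answer, messages
```

The paper's finding is that, under this kind of loop, reasoning accuracy often fails to improve and can degrade, since the same model that produced the error must also detect it.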

Jie Huang, Xinyun Chen, Swaroop Mishra, Huaixiu Steven Zheng, Adams Wei Yu, Xinying Song, Denny Zhou• 2023

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Text-to-SQL | BIRD (dev) | Execution Accuracy (EA): 55.02 | 251 |
| Math Reasoning | GSM8K (test) | -- | 192 |
| Text-to-SQL | Spider (dev) | -- | 100 |
| Commonsense Reasoning | Commonsense | -- | 29 |
| Mathematical Reasoning | Math Base-9 | -- | 20 |
| Knowledge Graph Question Answering | WebQSP 1-hop | Accuracy: 45.5 | 16 |
| Knowledge Graph Question Answering | MetaQA 1-hop | Accuracy: 60.5 | 16 |
| Knowledge Graph Question Answering | WebQSP 3-hop | Accuracy: 0.165 | 16 |
| Knowledge Graph Question Answering | MetaQA 3-hop | Accuracy: 34.8 | 16 |
| Fairness and Utility Evaluation | Fairness and Utility Benchmarks (BBQ, UnQover, CEB-Adult, CEB-Credit, CEB-Jigsaw, CrowS, ARC-C, GSM8K) | BBQ Accuracy: 97.1 | 8 |

Showing 10 of 13 rows.
