
Small Language Models Need Strong Verifiers to Self-Correct Reasoning

About

Self-correction has emerged as a promising way to boost the reasoning performance of large language models (LLMs): an LLM refines its solution using a self-generated critique that pinpoints the errors. This work explores whether small (<= 13B) language models (LMs) can self-correct on reasoning tasks with minimal input from stronger LMs. We propose a novel pipeline that prompts smaller LMs to collect self-correction data supporting the training of self-refinement abilities. First, we leverage correct solutions to guide the model in critiquing its incorrect responses. Second, the generated critiques, after filtering, are used for supervised fine-tuning of the self-correcting reasoner through solution refinement. Our experimental results show improved self-correction abilities of two models on five datasets spanning math and commonsense reasoning, with notable performance gains when paired with a strong GPT-4-based verifier, though limitations are identified when using a weak self-verifier to decide when to correct.
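The two-step data-collection pipeline described above can be sketched in code. This is a minimal illustration, not the authors' implementation: `critique_fn` and `refine_fn` stand in for prompted calls to a small LM (gold-solution-guided critique, then critique-guided refinement), and the filter keeps only critiques whose refinement recovers the gold answer.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Example:
    """One reasoning problem with a known-correct answer and a wrong model solution."""
    question: str
    gold_answer: str
    wrong_solution: str

def collect_self_correction_data(
    examples: List[Example],
    critique_fn: Callable[[str, str, str], str],  # (question, wrong, gold) -> critique
    refine_fn: Callable[[str, str, str], str],    # (question, wrong, critique) -> refined answer
) -> List[dict]:
    """Build SFT pairs for self-refinement, keeping only verified critiques.

    Step 1: critique the wrong solution while conditioning on the gold solution.
    Step 2: refine the wrong solution using that critique.
    Filter: retain the pair only if the refined answer matches the gold answer.
    (Illustrative sketch; the real pipeline prompts an LM at each step.)
    """
    sft_data = []
    for ex in examples:
        critique = critique_fn(ex.question, ex.wrong_solution, ex.gold_answer)
        refined = refine_fn(ex.question, ex.wrong_solution, critique)
        if refined.strip() == ex.gold_answer.strip():
            sft_data.append({
                "input": f"{ex.question}\n{ex.wrong_solution}\n{critique}",
                "target": refined,
            })
    return sft_data
```

The key design point is the filter: because critiques are generated with access to the correct solution, a critique is only trusted if applying it actually fixes the answer, which removes unhelpful or misleading critiques before fine-tuning.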

Yunxiang Zhang, Muhammad Khalifa, Lajanugen Logeswaran, Jaekyeom Kim, Moontae Lee, Honglak Lee, Lu Wang • 2024

Related benchmarks

| Task | Dataset | Accuracy | Rank |
| --- | --- | --- | --- |
| Mathematical Reasoning | GSM8K (test) | 47.4 | 751 |
| Commonsense Question Answering | CSQA (test) | 0.862 | 127 |
| Mathematical Reasoning | MATH Subset | 44.2 | 12 |
| Commonsense Question Answering | QASC | 72.8 | 4 |
| Commonsense Question Answering | RiddleSense | 67.3 | 4 |

Other info

Code
