
From Explicit CoT to Implicit CoT: Learning to Internalize CoT Step by Step

About

When leveraging language models for reasoning tasks, generating explicit chain-of-thought (CoT) steps often proves essential for achieving high accuracy in final outputs. In this paper, we investigate if models can be taught to internalize these CoT steps. To this end, we propose a simple yet effective method for internalizing CoT steps: starting with a model trained for explicit CoT reasoning, we gradually remove the intermediate steps and finetune the model. This process allows the model to internalize the intermediate reasoning steps, thus simplifying the reasoning process while maintaining high performance. Our approach enables a GPT-2 Small model to solve 9-by-9 multiplication with up to 99% accuracy, whereas standard training cannot solve beyond 4-by-4 multiplication. Furthermore, our method proves effective on larger language models, such as Mistral 7B, achieving over 50% accuracy on GSM8K without producing any intermediate steps.
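As a rough illustration of the stepwise internalization procedure described in the abstract (start from an explicit-CoT model, then repeatedly drop more of the intermediate steps and finetune), the sketch below shows how the training targets at each stage could be built. This is a minimal sketch, not the authors' code: the names `Example`, `build_stage_dataset`, and `internalize_cot`, and the one-token-per-stage removal schedule, are illustrative assumptions, and the actual finetuning call is omitted.

```python
# Minimal sketch of stepwise CoT internalization: at each stage, remove more
# of the leading chain-of-thought tokens and finetune on the shortened target.
# All names and the removal schedule are illustrative assumptions.

from dataclasses import dataclass
from typing import List


@dataclass
class Example:
    question: str
    cot_tokens: List[str]  # explicit chain-of-thought tokens
    answer: str


def build_stage_dataset(examples: List[Example], num_removed: int) -> List[str]:
    """Drop the first `num_removed` CoT tokens from every example and
    serialize question -> remaining CoT -> answer as one training string."""
    stage_data = []
    for ex in examples:
        remaining_cot = ex.cot_tokens[num_removed:]
        target = " ".join(remaining_cot + [ex.answer])
        stage_data.append(f"{ex.question} {target}".strip())
    return stage_data


def internalize_cot(examples: List[Example], tokens_per_stage: int = 1) -> None:
    """Iterate over stages, removing more CoT tokens each time; a real run
    would finetune the model on each stage's data before moving on."""
    max_len = max(len(ex.cot_tokens) for ex in examples)
    for num_removed in range(0, max_len + 1, tokens_per_stage):
        stage_data = build_stage_dataset(examples, num_removed)
        # finetune_one_stage(model, stage_data)  # hypothetical training call, omitted here
        print(f"stage: remove {num_removed} CoT tokens -> {stage_data[0]}")


if __name__ == "__main__":
    demo = [Example("12*34=", ["2*34=68", "10*34=340", "68+340=408"], "408")]
    internalize_cot(demo)
```

By the final stage the training target contains only the answer, so the finetuned model is asked to produce the result directly, with the intermediate reasoning internalized in its hidden states rather than emitted as text.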

Yuntian Deng, Yejin Choi, Stuart Shieber · 2024

Related benchmarks

Task | Dataset | Result | Rank
Mathematical Reasoning | GSM8K | Accuracy: 14.81 | 983
Mathematical Reasoning | SVAMP | Accuracy: 36.4 | 368
Mathematical Reasoning | MATH | Accuracy: 15 | 162
Mathematical Reasoning | GSM-Hard | Solve Rate: 3.87 | 162
Mathematical Reasoning | MultiArith | Accuracy: 38.2 | 116
Scientific Reasoning | GPQA | Accuracy: 39.6 | 50
Mathematical Reasoning | GSM8k Aug | Accuracy: 19.8 | 35
Math Reasoning | GSM-Hard | Accuracy: 3.87 | 31
Mathematical Reasoning | gsm | Accuracy: 46.7 | 27
Reasoning | ProsQA | Accuracy: 99.2 | 26
(10 of 14 benchmark rows shown.)
