Symbolic Chain-of-Thought Distillation: Small Models Can Also "Think" Step-by-Step

About

Chain-of-thought prompting (e.g., "Let's think step-by-step") primes large language models to verbalize rationalizations for their predictions. While chain-of-thought can lead to dramatic performance gains, benefits appear to emerge only for sufficiently large models (beyond 50B parameters). We show that orders-of-magnitude smaller models (125M -- 1.3B parameters) can still benefit from chain-of-thought prompting. To achieve this, we introduce Symbolic Chain-of-Thought Distillation (SCoTD), a method to train a smaller student model on rationalizations sampled from a significantly larger teacher model. Experiments across several commonsense benchmarks show that: 1) SCoTD enhances the performance of the student model in both supervised and few-shot settings, and especially for challenge sets; 2) sampling many reasoning chains per instance from the teacher is paramount; and 3) after distillation, student chain-of-thoughts are judged by humans as comparable to the teacher's, despite orders of magnitude fewer parameters. We test several hypotheses regarding which properties of chain-of-thought samples are important, e.g., diversity vs. teacher likelihood vs. open-endedness. We release our corpus of chain-of-thought samples and code.
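As a rough sketch of the distillation recipe described above (function names and the answer-formatting template below are illustrative assumptions, not taken from the paper's released code), the corpus-construction step pairs each input with many teacher-sampled rationales, each followed by the answer, and the student is then fine-tuned on these targets:

```python
def sample_rationales(teacher_generate, question, n_samples=30):
    """Sample n chain-of-thought rationales for one instance from the teacher.
    The paper finds that drawing many samples per instance is paramount."""
    return [teacher_generate(question) for _ in range(n_samples)]

def build_distillation_corpus(dataset, teacher_generate, n_samples=30):
    """Build student fine-tuning examples: each target is one sampled
    rationale followed by the answer, so the student learns to
    'think' step-by-step before predicting."""
    corpus = []
    for question, answer in dataset:
        for rationale in sample_rationales(teacher_generate, question, n_samples):
            corpus.append({
                "input": question,
                "target": f"{rationale} So the answer is {answer}.",
            })
    return corpus

# Hypothetical stub standing in for a large teacher model's sampler.
def toy_teacher(question):
    return f"Let's think step by step about '{question}'."

corpus = build_distillation_corpus(
    [("Can a fish climb a tree?", "no")], toy_teacher, n_samples=3
)
```

The resulting `corpus` would then be fed to a standard language-model fine-tuning loop for the small student; in this toy run it contains three examples, one per sampled rationale.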

Liunian Harold Li, Jack Hessel, Youngjae Yu, Xiang Ren, Kai-Wei Chang, Yejin Choi • 2023

Related benchmarks

Task                        Dataset     Metric    Result   Rank
Mathematical Reasoning      GSM8K       Accuracy  86.2     499
Mathematical Reasoning      MATH        Accuracy  74.88    338
Commonsense Reasoning       BoolQ       Accuracy  74.62    212
Mathematical Reasoning      TabMWP      Accuracy  94.17    188
Commonsense Reasoning       CSQA        Accuracy  73.12    126
Reasoning                   OpenBookQA  Accuracy  79.8     77
Natural Language Inference  aNLI        Accuracy  62.73    65
Question Answering          ARC-C       Accuracy  87.26    54
Question Answering          SQA         Accuracy  73.51    24
Reasoning                   Date        Accuracy  75.1     24
Showing 10 of 12 rows