
Teaching Small Language Models to Reason

About

Chain-of-thought prompting successfully improves the reasoning capabilities of large language models, achieving state-of-the-art results on a range of datasets. However, these reasoning capabilities only appear to emerge in models with over 100 billion parameters. In this paper, we explore transferring such reasoning capabilities to models with fewer than 100 billion parameters via knowledge distillation. Specifically, we finetune a student model on the chain-of-thought outputs generated by a larger teacher model. Our experiments show that the proposed method improves task performance across arithmetic, commonsense, and symbolic reasoning datasets. For example, the accuracy of T5 XXL on GSM8K improves from 8.11% to 21.99% when finetuned on PaLM-540B generated chains of thought.
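The recipe the abstract describes amounts to: prompt the teacher for chain-of-thought rationales on the training questions, then finetune the student on (question → rationale + answer) pairs. A minimal sketch of the data-preparation step is below; the function and field names are illustrative, and the answer-correctness filter (keeping only rationales whose final answer matches the gold label) is an assumption about the pipeline, not a claim about the paper's exact implementation:

```python
def build_distillation_set(examples, teacher_generate):
    """Build student finetuning pairs from teacher chain-of-thought outputs.

    examples: list of dicts with 'question' and 'answer' (gold label).
    teacher_generate: callable question -> (rationale, predicted_answer),
        standing in for a few-shot CoT-prompted large teacher model.
    All names here are illustrative, not from the paper.
    """
    pairs = []
    for ex in examples:
        rationale, predicted = teacher_generate(ex["question"])
        # Assumed filter: drop rationales whose final answer is wrong,
        # so the student is not trained on faulty reasoning chains.
        if predicted == ex["answer"]:
            source = ex["question"]
            target = f"{rationale} The answer is {ex['answer']}."
            pairs.append((source, target))
    return pairs


# Usage with a toy stand-in for the teacher model:
examples = [
    {"question": "What is 2 + 3?", "answer": "5"},
    {"question": "What is 2 * 3?", "answer": "6"},
]

def toy_teacher(question):
    if "+" in question:
        return ("2 plus 3 equals 5.", "5")
    return ("2 times 3 equals 5.", "5")  # wrong answer: gets filtered out

pairs = build_distillation_set(examples, toy_teacher)
```

The resulting source/target pairs would then be fed to a standard sequence-to-sequence finetuning loop for the student (e.g. a T5 variant).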

Lucie Charlotte Magister, Jonathan Mallinson, Jakub Adamek, Eric Malmi, Aliaksei Severyn • 2022

Related benchmarks

Task                    Dataset                     Metric    Result  Rank
Mathematical Reasoning  GSM8K                       Accuracy  38.2    983
Mathematical Reasoning  SVAMP                       Accuracy  40.2    368
Mathematical Reasoning  GSM8K                       Accuracy  71.5    351
Arithmetic Reasoning    GSM8K (test)                Accuracy  21.99   129
Mathematical Reasoning  MathQA                      Accuracy  25.3    95
Commonsense Reasoning   StrategyQA (test)           Accuracy  63.77   81
Mathematical Reasoning  GSM8K original (test)       Accuracy  21.99   44
Mathematical Reasoning  MATH 500                    Accuracy  26.6    40
Mathematical Reasoning  SVAMP                       Accuracy  82      14
Arithmetic Reasoning    MAWPS (5-fold cross val)    Accuracy  70.41   10

Showing 10 of 15 rows
