
Self-Instruct: Aligning Language Models with Self-Generated Instructions

About

Large "instruction-tuned" language models (i.e., finetuned to respond to instructions) have demonstrated a remarkable ability to generalize zero-shot to new tasks. Nevertheless, they depend heavily on human-written instruction data that is often limited in quantity, diversity, and creativity, therefore hindering the generality of the tuned model. We introduce Self-Instruct, a framework for improving the instruction-following capabilities of pretrained language models by bootstrapping off their own generations. Our pipeline generates instructions, input, and output samples from a language model, then filters invalid or similar ones before using them to finetune the original model. Applying our method to the vanilla GPT3, we demonstrate a 33% absolute improvement over the original model on Super-NaturalInstructions, on par with the performance of InstructGPT-001, which was trained with private user data and human annotations. For further evaluation, we curate a set of expert-written instructions for novel tasks, and show through human evaluation that tuning GPT3 with Self-Instruct outperforms using existing public instruction datasets by a large margin, leaving only a 5% absolute gap behind InstructGPT-001. Self-Instruct provides an almost annotation-free method for aligning pre-trained language models with instructions, and we release our large synthetic dataset to facilitate future studies on instruction tuning. Our code and data are available at https://github.com/yizhongw/self-instruct.

Yizhong Wang, Yeganeh Kordi, Swaroop Mishra, Alisa Liu, Noah A. Smith, Daniel Khashabi, Hannaneh Hajishirzi • 2022

Related benchmarks

Task | Dataset | Metric | Result | Rank
Language Understanding | MMLU | Accuracy | 70.2 | 756
Mathematical Reasoning | MATH | Accuracy | 7.13 | 643
Code Generation | MBPP (test) | -- | -- | 276
Code Generation | MBPP | Pass@1 | 36.27 | 175
Mathematical Reasoning | GSM8K | Math Score | 50.09 | 171
Code Generation | HumanEval 1.0 (test) | Pass@1 | 0.927 | 145
Code Generation | HumanEval | Pass@1 | 25.65 | 108
Instruction Following | DollyEval | Score | 36.38 | 106
Table Question Answering | WTQ | Accuracy | 13.77 | 101
Agentic Reasoning | ∞Bench | Score | 53.41 | 100

Showing 10 of 33 rows.
