
One-Shot Learning as Instruction Data Prospector for Large Language Models

About

Contemporary practices in instruction tuning often hinge on scaling up data without a clear strategy for ensuring data quality, inadvertently introducing noise that may compromise model performance. To address this challenge, we introduce Nuggets, a novel and efficient methodology that leverages one-shot learning to discern and select high-quality instruction data from extensive datasets. Nuggets assesses the potential of individual instruction examples to act as effective one-shot learning instances, thereby identifying those that can significantly improve performance across diverse tasks. Nuggets uses a scoring system based on the impact of candidate examples on the perplexity of a diverse anchor set, facilitating the selection of the most advantageous data for instruction tuning. Through comprehensive evaluations on two benchmarks, MT-Bench and Alpaca-Eval, we show that instruction tuning with the top 1% of examples curated by Nuggets substantially outperforms conventional methods employing the entire dataset.
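
For intuition, below is a minimal sketch of the kind of scoring the abstract describes: each candidate (instruction, response) pair is prepended as a one-shot demonstration to a set of anchor tasks, and the candidate is credited whenever it raises the likelihood (i.e., lowers the perplexity) of the anchor's reference completion relative to the zero-shot case. The model name, the prompt format, and the helper names (completion_logprob, golden_score) are illustrative assumptions for this sketch, not the authors' released code.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative base model; the paper's exact model may differ.
model_name = "meta-llama/Llama-2-7b-hf"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name).eval()

@torch.no_grad()
def completion_logprob(prompt: str, completion: str) -> float:
    """Mean log-likelihood of `completion` given `prompt` (higher = lower perplexity).
    Note: splitting token counts this way is an approximation at the prompt/completion
    boundary, which is acceptable for a sketch."""
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    full_ids = tokenizer(prompt + completion, return_tensors="pt").input_ids
    labels = full_ids.clone()
    labels[:, : prompt_ids.shape[1]] = -100  # score only the completion tokens
    out = model(full_ids, labels=labels)
    return -out.loss.item()  # negative mean cross-entropy over completion tokens

def golden_score(candidate: dict, anchors: list[dict]) -> float:
    """Fraction of anchor tasks whose completion likelihood improves when the
    candidate pair is prepended as a one-shot demonstration (sketch of the
    zero-shot vs. one-shot comparison the abstract describes)."""
    demo = f"{candidate['instruction']}\n{candidate['response']}\n\n"
    wins = 0
    for a in anchors:
        zero_shot = completion_logprob(a["instruction"] + "\n", a["response"])
        one_shot = completion_logprob(demo + a["instruction"] + "\n", a["response"])
        wins += int(one_shot > zero_shot)
    return wins / len(anchors)

# Rank the candidate pool by score and keep the top 1% for instruction tuning.
# ranked = sorted(candidate_pool, key=lambda c: golden_score(c, anchor_set), reverse=True)
# top_1_percent = ranked[: max(1, len(ranked) // 100)]

In practice the likelihood computations would be batched and the zero-shot scores cached, since only the one-shot term changes across candidates.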

Yunshui Li, Binyuan Hui, Xiaobo Xia, Jiaxi Yang, Min Yang, Lei Zhang, Shuzheng Si, Ling-Hao Chen, Junhao Liu, Tongliang Liu, Fei Huang, Yongbin Li • 2023

Related benchmarks

Task                  | Dataset            | Result                  | Rank
Language Modeling     | WikiText2          | Perplexity 9.45         | 2839
Language Modeling     | WikiText-2 (test)  | PPL 18.27               | 1949
Commonsense Reasoning | HellaSwag          | Accuracy 76.72          | 1891
Language Modeling     | WikiText-2         | Perplexity (PPL) 12.76  | 1624
Commonsense Reasoning | WinoGrande         | Accuracy 69.24          | 1085
Language Modeling     | PTB                | Perplexity 18.02        | 1034
Commonsense Reasoning | PIQA               | Accuracy 78.34          | 751
Language Modeling     | PTB (test)         | Perplexity 30.9         | 526
Question Answering    | ARC-E              | Accuracy 68.06          | 416
Question Answering    | BoolQ              | Accuracy 69.74          | 317
Showing 10 of 45 rows
