One-Shot Learning as Instruction Data Prospector for Large Language Models
About
Contemporary instruction-tuning practice often scales up data without a clear strategy for ensuring quality, inadvertently introducing noise that can compromise model performance. To address this challenge, we introduce Nuggets, a novel and efficient methodology that leverages one-shot learning to discern and select high-quality instruction data from extensive datasets. Nuggets assesses the potential of individual instruction examples to act as effective one-shot learning instances, identifying those that can significantly improve performance across diverse tasks. It scores each candidate example by its impact on the perplexity of a diverse anchor set, enabling the selection of the most advantageous data for instruction tuning. Through comprehensive evaluations on two benchmarks, MT-Bench and Alpaca-Eval, we show that instruction tuning with the top 1% of examples curated by Nuggets substantially outperforms conventional approaches that use the entire dataset.
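The scoring idea above can be sketched in a few lines: a candidate example earns credit for each anchor task whose answer becomes more likely (lower perplexity) when the candidate is prepended as a one-shot demonstration. The sketch below is a simplified illustration, not the paper's implementation; in practice `log_likelihood` would be computed by the base LLM over the answer tokens, while here a toy word-overlap surrogate stands in so the example runs standalone. The names `golden_score`, `select_top`, and `toy_ll` are illustrative choices.

```python
from typing import Callable, List, Tuple

Example = Tuple[str, str]  # (instruction, answer)


def golden_score(
    candidate: Example,
    anchors: List[Example],
    log_likelihood: Callable[[str, str], float],
) -> float:
    """Fraction of anchor tasks whose answer likelihood improves when the
    candidate is prepended as a one-shot demonstration (higher is better)."""
    wins = 0
    for question, answer in anchors:
        zero_shot = log_likelihood(question, answer)
        one_shot_prompt = f"{candidate[0]}\n{candidate[1]}\n\n{question}"
        if log_likelihood(one_shot_prompt, answer) > zero_shot:
            wins += 1
    return wins / len(anchors)


def select_top(
    pool: List[Example],
    anchors: List[Example],
    log_likelihood: Callable[[str, str], float],
    fraction: float = 0.01,
) -> List[Example]:
    """Keep the top `fraction` of the pool by golden score."""
    scored = sorted(
        pool,
        key=lambda ex: golden_score(ex, anchors, log_likelihood),
        reverse=True,
    )
    return scored[: max(1, int(len(scored) * fraction))]


def toy_ll(prompt: str, answer: str) -> float:
    """Toy surrogate for an LLM's log-likelihood: counts answer words
    that also appear in the prompt. A real scorer would sum the model's
    token log-probabilities of `answer` given `prompt`."""
    return len(set(prompt.lower().split()) & set(answer.lower().split()))


if __name__ == "__main__":
    anchors = [
        ("What color is the sky?", "blue"),
        ("What color is grass?", "green"),
    ]
    helpful = ("Name two colors.", "blue green")   # primes both anchors
    unrelated = ("Capital of France?", "Paris")    # helps neither
    print(golden_score(helpful, anchors, toy_ll))    # 1.0
    print(golden_score(unrelated, anchors, toy_ll))  # 0.0
    print(select_top([unrelated, helpful], anchors, toy_ll, fraction=0.5))
```

With a real model, `toy_ll` would be replaced by a per-token log-probability sum over the answer, and the top-1% selection would reproduce the filtering step the abstract describes.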
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Language Modeling | WikiText2 | Perplexity | 9.45 | 1875 |
| Language Modeling | WikiText-2 (test) | PPL | 18.27 | 1541 |
| Commonsense Reasoning | HellaSwag | Accuracy | 76.72 | 1460 |
| Language Modeling | WikiText-2 | Perplexity (PPL) | 12.76 | 841 |
| Commonsense Reasoning | WinoGrande | Accuracy | 69.24 | 776 |
| Language Modeling | PTB | Perplexity | 18.02 | 650 |
| Commonsense Reasoning | PIQA | Accuracy | 78.34 | 647 |
| Language Modeling | PTB (test) | Perplexity | 30.9 | 471 |
| Question Answering | OBQA | Accuracy | 40.2 | 276 |
| Question Answering | ARC-E | Accuracy | 68.06 | 242 |