
AlpaGasus: Training A Better Alpaca with Fewer Data

About

Large language models (LLMs) strengthen their instruction-following capability through instruction-finetuning (IFT) on supervised instruction/response data. However, widely used IFT datasets (e.g., Alpaca's 52k examples) surprisingly contain many low-quality instances with incorrect or irrelevant responses, which are misleading and detrimental to IFT. In this paper, we propose a simple and effective data selection strategy that automatically identifies and filters out low-quality data using a strong LLM (e.g., ChatGPT). To this end, we introduce AlpaGasus, which is finetuned on only 9k high-quality examples filtered from the 52k Alpaca data. AlpaGasus significantly outperforms the original Alpaca as evaluated by GPT-4 on multiple test sets and in a controlled human evaluation. Its 13B variant matches >90% of the performance of its teacher LLM (i.e., Text-Davinci-003, which generated the 52k data) on the test tasks. It also provides 5.7x faster training, reducing the training time of a 7B variant from 80 minutes (for Alpaca) to 14 minutes. Moreover, experiments confirm the efficacy of our method across diverse datasets, base models, and LLM filters. Overall, AlpaGasus demonstrates a novel data-centric IFT paradigm that can be generally applied to instruction-tuning data, leading to faster training and better instruction-following models. Our project page is available at: https://lichang-chen.github.io/AlpaGasus/
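The filtering strategy described above reduces to prompting a judge LLM to score each (instruction, input, response) triplet and keeping only examples above a quality threshold. Below is a minimal sketch of that idea in Python. The exact rating prompt, the 0-5 scale, the 4.5 keep-threshold, and the `gpt-3.5-turbo` judge are illustrative assumptions, not the paper's verbatim setup; the client calls follow the public `openai` Python package.

```python
# A minimal sketch of LLM-based quality filtering in the spirit of AlpaGasus.
# Assumptions (illustrative, not from the paper verbatim): the rating prompt
# wording, the 0-5 scale, and the 4.5 keep-threshold. The data file is assumed
# to be in Alpaca format (a JSON list of instruction/input/output dicts).
import json
import re

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

RATING_PROMPT = (
    "Below is an instruction, an optional input, and a response.\n"
    "Rate the quality of the response on a scale of 0 to 5, where 5 means it "
    "answers the instruction accurately and completely. Reply with the score only.\n\n"
    "Instruction: {instruction}\nInput: {input}\nResponse: {output}"
)


def rate_example(example: dict) -> float:
    """Ask the judge LLM for a scalar quality score of one IFT triplet."""
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",  # stand-in for the paper's ChatGPT filter
        messages=[{"role": "user", "content": RATING_PROMPT.format(**example)}],
        temperature=0,
    )
    # Pull the first number out of the reply; default to 0 if the judge
    # answers in an unexpected format.
    match = re.search(r"\d+(\.\d+)?", reply.choices[0].message.content)
    return float(match.group()) if match else 0.0


def filter_dataset(path: str, threshold: float = 4.5) -> list[dict]:
    """Keep only examples whose judge score clears the threshold."""
    with open(path) as f:
        data = json.load(f)
    return [ex for ex in data if rate_example(ex) >= threshold]


if __name__ == "__main__":
    kept = filter_dataset("alpaca_data.json")
    print(f"kept {len(kept)} high-quality examples")
```

The surviving subset would then be used for ordinary instruction finetuning in place of the full 52k set, which is where the reported 5.7x training speedup comes from: less data, same recipe.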

Lichang Chen, Shiyang Li, Jun Yan, Hai Wang, Kalpa Gunaratna, Vikas Yadav, Zheng Tang, Vijay Srinivasan, Tianyi Zhou, Heng Huang, Hongxia Jin • 2023

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Multimodal Evaluation | MME | -- | 557 |
| Mathematical Reasoning | MathVista | Score: 23.9 | 322 |
| Science Question Answering | ARC Challenge | Accuracy: 56.4 | 234 |
| Science Question Answering | ScienceQA | -- | 229 |
| Multimodal Understanding | SEED-Bench | -- | 203 |
| Multimodal Evaluation | MMBench | MMB Score: 34.71 | 118 |
| Question Answering | ARC Challenge | Normalized Accuracy: 49.91 | 48 |
| Hallucination and Visual Reasoning Evaluation | HallusionBench | -- | 37 |
| General Language Modeling | MMLU, ARC-Challenge, and CommonsenseQA Aggregate | Average Score: 64.19 | 24 |
| Language Understanding | MMLU | MMLU Score: 65.18 | 24 |

Showing 10 of 13 rows.
