
RECOST: External Knowledge Guided Data-efficient Instruction Tuning

About

In the current landscape of large language models (LLMs), instruction tuning is an essential step. Given the high computational cost of this step, data-efficient instruction tuning was proposed to reduce the training data size, aiming to select high-quality instruction data. Nevertheless, we argue that most current data-efficient instruction-tuning methods are highly dependent on the quality of the original instruction-tuning dataset. When it comes to datasets synthesized by LLMs, a common scenario in this field, dirty samples may even be selected with higher probability than other samples. To address these challenges, we utilize external knowledge (relevant examples or paragraphs) to evaluate samples synthesized by LLMs with an in-context-based relative predictive entropy. Based on this new metric, we propose a framework, dubbed RECOST, which integrates external-knowledge-based re-ranking and diversity-consistent sampling into a single pipeline. Through extensive experiments on several synthetic datasets (Alpaca and Alpaca-gpt4), we demonstrate the effectiveness of our method and achieve even better results with only 1% of the full dataset.
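As a rough illustration of the core idea (not the authors' exact formulation), the sketch below scores a synthetic (instruction, response) pair by comparing the model's average token-level predictive entropy on the response with and without retrieved external knowledge prepended to the prompt. The model choice, prompt templates, and function names are assumptions for illustration only; the retrieval of the external knowledge itself is left out.

```python
# Illustrative sketch only: how an "in-context relative predictive entropy" style
# score might be computed. This is an assumption-laden approximation, not the
# RECOST implementation.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # placeholder causal LM; any scorer model could be used
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME).eval()


@torch.no_grad()
def mean_response_entropy(prompt: str, response: str) -> float:
    """Average predictive entropy over the response tokens, given the prompt."""
    prompt_ids = tokenizer(prompt, return_tensors="pt").input_ids
    full_ids = tokenizer(prompt + response, return_tensors="pt").input_ids
    logits = model(full_ids).logits  # (1, seq_len, vocab)
    # Logits at position t predict token t+1, so the distributions over the
    # response tokens live at positions [len(prompt)-1, len(full)-1).
    start = prompt_ids.shape[1] - 1
    end = full_ids.shape[1] - 1
    probs = torch.softmax(logits[0, start:end], dim=-1)
    entropy = -(probs * torch.log(probs + 1e-12)).sum(dim=-1)
    return entropy.mean().item()


def relative_entropy_score(instruction: str, response: str, knowledge: str) -> float:
    """Higher score means the external knowledge makes the response more
    predictable, a rough proxy for the sample agreeing with that knowledge."""
    plain = mean_response_entropy(
        f"Instruction: {instruction}\nResponse: ", response
    )
    grounded = mean_response_entropy(
        f"Reference: {knowledge}\nInstruction: {instruction}\nResponse: ", response
    )
    return plain - grounded
```

In a pipeline following the abstract, such a score would drive the re-ranking of synthetic samples, with a diversity-consistent sampling step then choosing the final training subset; the exact formulation and prompting are in the paper.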

Qi Zhang, Yiming Zhang, Haobo Wang, Junbo Zhao • 2024

Related benchmarks

Task                              Dataset                        Metric            Result   Rank
Instruction Following             AlpacaEval                     Win Rate          39.19    125
Instruction Following             AlpacaEval v1 (test)           AlpacaEval Score  63.35    14
Natural Language Understanding    Open LLM Leaderboard (test)    ARC               57.68    13
General Language Understanding    Open LLM Leaderboard           Average Score     56.07    7
