
Superfiltering: Weak-to-Strong Data Filtering for Fast Instruction-Tuning

About

Instruction tuning is critical to improving LLMs but usually suffers from low-quality and redundant data. Data filtering for instruction tuning has proven important for improving both the efficiency and the performance of the tuning process, but it also incurs extra cost and computation because it typically relies on LLMs themselves. To reduce the filtering cost, we study Superfiltering: Can we use a smaller and weaker model to select data for finetuning a larger and stronger model? Despite the performance gap between weak and strong language models, we find that they perceive instruction difficulty in a highly consistent way and therefore produce highly similar data selection results. This enables us to use a much smaller and more efficient model to filter the instruction data used to train a larger language model. Not only does this greatly speed up data filtering, but the LLM finetuned on the filtered data also achieves even better performance on standard benchmarks. Extensive experiments validate the efficacy and efficiency of our approach.
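For intuition, below is a minimal sketch of the weak-to-strong filtering idea: a small proxy model (GPT-2 here) scores each instruction-response pair by Instruction-Following Difficulty (IFD), the ratio of the response's perplexity conditioned on the instruction to its unconditioned perplexity, and only the highest-scoring fraction is kept to finetune the stronger model. The prompt template, the choice of GPT-2 as the proxy, the IFD > 1 cutoff, and the 10% keep ratio are illustrative assumptions, not the authors' released implementation.

```python
# Sketch of weak-to-strong data filtering: score instruction data with a
# small model's IFD and keep only the hardest examples. Assumes the
# Hugging Face `transformers` and `torch` libraries.
import math
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

device = "cuda" if torch.cuda.is_available() else "cpu"
tok = GPT2TokenizerFast.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2").to(device).eval()

@torch.no_grad()
def mean_nll(prefix: str, target: str) -> float:
    """Mean token-level cross-entropy of `target`, conditioned on `prefix`."""
    prefix_len = tok(prefix, return_tensors="pt").input_ids.shape[1]
    ids = tok(prefix + target, return_tensors="pt").input_ids.to(device)
    labels = ids.clone()
    labels[:, :prefix_len] = -100  # mask the prefix; score only the target span
    return lm(input_ids=ids, labels=labels).loss.item()

def ifd(instruction: str, response: str) -> float:
    """IFD = PPL(response | instruction) / PPL(response).
    The prompt template below is an assumed stand-in, not the paper's exact one."""
    prompt = f"Instruction: {instruction}\n\nResponse: "
    cond = mean_nll(prompt, response)
    uncond = mean_nll("", response)
    return math.exp(cond - uncond)  # ratio of the two perplexities

def superfilter(dataset: list[dict], keep_ratio: float = 0.10) -> list[dict]:
    """Keep the top `keep_ratio` fraction of examples by IFD score.
    Samples with IFD > 1 (the instruction makes the response *harder*
    to predict) are discarded first, following the IFD heuristic."""
    scored = [(ifd(d["instruction"], d["response"]), d) for d in dataset]
    scored = [(s, d) for s, d in scored if s <= 1.0]
    scored.sort(key=lambda t: t[0], reverse=True)
    return [d for _, d in scored[: max(1, int(len(scored) * keep_ratio))]]
```

The sketch relies only on the *ranking* the proxy induces, not on its absolute perplexities, which is why a weak model can stand in for a strong one: the paper's key observation is that weak and strong models rank instruction difficulty highly consistently.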

Ming Li, Yong Zhang, Shwai He, Zhitao Li, Hongyu Zhao, Jianzong Wang, Ning Cheng, Tianyi Zhou • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Mathematical Reasoning | GSM8K (test) | Accuracy | 7.6 | 797 |
| Mathematical Reasoning | MATH 500 | Accuracy | 63.4 | 155 |
| Financial Question Answering | FiQA | Accuracy | 31.5 | 85 |
| Medical Knowledge Question Answering | Medical Domain (MedQA, MMLU, MedMCQA) (test) | MedQA Score | 41.63 | 45 |
| Language Understanding | Aggregate ARC-C, MMLU, HellaSwag, TruthfulQA (test) | Total Score | 142 | 22 |
| Instruction Tuning | Alpaca instruction-tuning 52k | Pairwise Winning Score | 110 | 19 |
| Instruction Following | General Domain (AlpacaEval, Arena-Hard) LLaMA3-8B (10% selection) | AlpacaEval Score | 12.08 | 18 |
| Math Problem Solving | Math Domain (AIME24, Math-OAI, Minerva, Olympiad, ACM23) Qwen2.5-7B (10% selection) | AIME24 Score | 4.8 | 18 |
| Code Generation | Code Domain (HumanEval, HumanEval+, MBPP, MBPP+, Bigcode) (test) | HumanEval | 43.3 | 18 |
| Budgeted Subset Selection | Dolly 15% retention (train) | SUM | 140.6 | 6 |
