
Low-Confidence Gold: Refining Low-Confidence Samples for Efficient Instruction Tuning

About

The effectiveness of instruction fine-tuning for Large Language Models is fundamentally constrained by the quality and efficiency of training datasets. This work introduces Low-Confidence Gold (LCG), a novel filtering framework that employs centroid-based clustering and confidence-guided selection to identify valuable instruction pairs. Through a semi-supervised approach using a lightweight classifier trained on representative samples, LCG curates high-quality subsets while preserving data diversity. Experimental evaluation demonstrates that models fine-tuned on LCG-filtered subsets of 6K samples outperform existing methods, with substantial improvements on MT-Bench and consistent gains across comprehensive evaluation metrics. The framework's efficiency in data selection while maintaining model performance establishes a promising direction for efficient instruction tuning. All open-source assets are publicly available at https://github.com/Lizruletheworld/Low-Confidence_Gold.
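The pipeline the abstract describes (cluster instruction embeddings, train a lightweight classifier on representative samples near each centroid, then select by confidence) can be sketched as follows. This is a hypothetical illustration, not the authors' released code: the function name `lcg_filter`, the choice of k-means plus logistic regression, and the "keep the least-confident samples per cluster" rule are all assumptions made for the sketch.

```python
# Hypothetical sketch of an LCG-style filtering pipeline (illustration only,
# not the authors' implementation):
#   1) cluster instruction embeddings with k-means,
#   2) train a lightweight classifier on the sample nearest each centroid,
#   3) within each cluster, keep the samples the classifier is least
#      confident about, preserving diversity across clusters.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.linear_model import LogisticRegression

def lcg_filter(embeddings, n_clusters=4, keep_per_cluster=2, seed=0):
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed)
    labels = km.fit_predict(embeddings)

    # Representative "seed" set: the sample closest to each centroid,
    # pseudo-labeled with its cluster id (semi-supervised step).
    dists = np.linalg.norm(
        embeddings[:, None, :] - km.cluster_centers_[None, :, :], axis=2
    )
    seed_idx = dists.argmin(axis=0)
    clf = LogisticRegression(max_iter=1000)
    clf.fit(embeddings[seed_idx], labels[seed_idx])

    # Confidence = max predicted class probability. Low-confidence samples
    # sit away from the easy, redundant cluster cores, so we keep those.
    conf = clf.predict_proba(embeddings).max(axis=1)
    keep = []
    for c in range(n_clusters):
        members = np.where(labels == c)[0]
        keep.extend(members[np.argsort(conf[members])[:keep_per_cluster]])
    return sorted(keep)
```

With `n_clusters=4` and `keep_per_cluster=2`, the function returns the indices of 8 selected samples; on a real instruction corpus the embeddings would come from a sentence encoder and the budget would be set to reach the target subset size (6K in the paper).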

Hongyi Cai, Jie Li, Mohammad Mahdinur Rahman, Wenzhen Dong · 2025

Related benchmarks

Task | Dataset | Result | Rank
Multi-turn Instruction Following | MT-Bench | MT-Bench Score (GPT-4): 5.086 | 44
General Language Understanding and Reasoning | HuggingFace Open LLM Leaderboard | HellaSwag Accuracy: 62 | 20
Instruction Tuning Evaluation | ARC, GSM8k, HellaSwag, MMLU (test val) | ARC Accuracy: 52.31 | 7
