
SelectIT: Selective Instruction Tuning for LLMs via Uncertainty-Aware Self-Reflection

About

Instruction tuning (IT) is crucial to tailoring large language models (LLMs) towards human-centric interactions. Recent advancements have shown that the careful selection of a small, high-quality subset of IT data can significantly enhance the performance of LLMs. Despite this, common approaches often rely on additional models or data, which increases costs and limits widespread adoption. In this work, we propose a novel approach, termed SelectIT, that capitalizes on the foundational capabilities of the LLM itself. Specifically, we exploit the intrinsic uncertainty present in LLMs to more effectively select high-quality IT data, without the need for extra resources. Furthermore, we introduce a curated IT dataset, the Selective Alpaca, created by applying SelectIT to the Alpaca-GPT4 dataset. Empirical results demonstrate that IT using Selective Alpaca leads to substantial model ability enhancement. The robustness of SelectIT has also been corroborated in various foundation models and domain-specific tasks. Our findings suggest that longer and more computationally intensive IT data may serve as superior sources of IT, offering valuable insights for future research in this area. Data, code, and scripts are freely available at https://github.com/Blue-Raincoat/SelectIT.
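The core idea — scoring IT examples by the model's own rating uncertainty instead of an external judge — can be sketched as follows. This is an illustrative sketch, not the paper's exact formulation: the confidence-gap term, the spread penalty `alpha`, and the function `rate_example` are all assumptions; the real method derives rating probabilities from the LLM's token logits across several rephrased rating prompts.

```python
import statistics

def rate_example(prompt_rating_probs, alpha=0.2):
    """Score one IT example from per-prompt probability distributions
    over rating tokens 1..K.

    prompt_rating_probs: list of lists; each inner list holds the model's
    probabilities for the K rating tokens under one rephrased rating prompt.
    (Sketch only: alpha and the penalty form are illustrative.)
    """
    per_prompt_scores = []
    for probs in prompt_rating_probs:
        # Token-level reflection: expected rating, boosted by how confidently
        # the model separates the top rating from the runner-up.
        ratings = range(1, len(probs) + 1)
        expected = sum(r * p for r, p in zip(ratings, probs))
        confidence = max(probs) - sorted(probs)[-2]  # gap to second-best
        per_prompt_scores.append(expected * (1 + confidence))
    # Sentence-level reflection: reward agreement across rephrased rating
    # prompts by penalizing the spread of per-prompt scores.
    mean = statistics.mean(per_prompt_scores)
    spread = statistics.pstdev(per_prompt_scores)
    return mean - alpha * spread
```

Under this sketch, an example the model rates highly and consistently (e.g. probability mass concentrated on the top rating across all prompts) outscores one it rates with a flat, uncertain distribution, so sorting the corpus by this score and keeping the top fraction yields the selected subset.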

Liangxin Liu, Xuebo Liu, Derek F. Wong, Dongfang Li, Ziyi Wang, Baotian Hu, Min Zhang • 2024

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Medical Knowledge Question Answering | Medical Domain (MedQA, MMLU, MedMCQA) (test) | MedQA Score: 46.11 | 45 |
| Code Generation | Code Domain (HumanEval, HumanEval+, MBPP, MBPP+, Bigcode) (test) | HumanEval: 48.2 | 18 |
| Instruction Tuning | IT Evaluation Suite (MMLU, BBH, GSM, TydiQA, CodeX, AE) | MMLU: 55.7 | 18 |
| Math Problem Solving | Math Domain (AIME24, Math-OAI, Minerva, Olympiad, ACM23), Qwen2.5-7B (10% selection) | AIME24 Score: 4.15 | 18 |
| Instruction Following | General Domain (AlpacaEval, Arena-Hard), LLaMA3-8B (10% selection) | AlpacaEval Score: 7.84 | 18 |
| Instruction Tuning Evaluation | Open Instruct Evaluation Suite (test) | MMLU: 61.2 | 12 |
| Machine Translation | ALL (average of two language pairs in four directions, wmt22-comet-da) | COMET: 84.2 | 12 |

Other info

Code

https://github.com/Blue-Raincoat/SelectIT
