
Instruction Data Selection via Answer Divergence

About

Instruction tuning relies on large instruction-response corpora whose quality and composition strongly affect downstream performance. We propose Answer Divergence-Guided Selection (ADG), which selects instruction data based on the geometric structure of multi-sample outputs. ADG draws several high-temperature generations per instruction, maps responses into an embedding space, and computes an output divergence score that jointly encodes dispersion magnitude and shape anisotropy. High scores correspond to instructions whose answers are both far apart and multi-modal, rather than clustered paraphrases along a single direction. Across two backbones and three public instruction pools, fine-tuning on only 10K ADG-selected examples consistently outperforms strong selectors on six benchmarks spanning reasoning, knowledge, and coding. Analyses further show that both dispersion magnitude and shape anisotropy are necessary, supporting answer divergence as a practical signal for instruction data selection. Code and appendix are included in the supplementary materials.
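The abstract describes scoring each instruction by embedding several high-temperature responses and combining dispersion magnitude with shape anisotropy. The sketch below is an illustrative reconstruction, not the paper's actual formula: the function name, the use of total covariance variance as the dispersion term, and the normalized spectral entropy as the shape term (high when variance spreads across many directions rather than one) are all assumptions.

```python
import numpy as np

def answer_divergence_score(embeddings, eps=1e-12):
    """Illustrative output-divergence score for one instruction (assumed form).

    embeddings: (n_samples, dim) array, one row per sampled response.
    Combines a dispersion-magnitude term (total variance) with a shape term
    (normalized spectral entropy of the covariance eigenvalues), so scores are
    high when answers are both far apart and spread in multiple directions.
    """
    X = np.asarray(embeddings, dtype=float)
    X = X - X.mean(axis=0, keepdims=True)            # center the responses
    cov = X.T @ X / max(len(X) - 1, 1)               # sample covariance
    eigvals = np.clip(np.linalg.eigvalsh(cov), 0.0, None)
    dispersion = eigvals.sum()                       # how far apart answers are
    p = eigvals / (eigvals.sum() + eps)
    # spectral entropy in [0, 1]: near 0 for paraphrases along one direction,
    # near 1 when variance is shared across many directions (multi-modal shape)
    shape = -np.sum(p * np.log(p + eps)) / np.log(len(p))
    return dispersion * shape

# Selection would then rank the instruction pool by this score and
# keep the top-K (e.g. 10K) examples for fine-tuning.
```

A multiplicative combination is one natural way to require both signals at once, matching the analysis claim that magnitude and anisotropy are individually necessary; the paper may well combine them differently.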

Bo Li, Mingda Wang, Shikun Zhang, Wei Ye• 2026

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| General Capability | BBH, GSM8K, MMLU, TruthfulQA, HumanEval, MBPP | Average Score 26.77 | 30 |
| Knowledge | MMLU, TruthfulQA | MMLU 36.1 | 30 |
| Coding | HumanEval, MBPP | HumanEval Score 20.73 | 30 |
| Reasoning | BBH, GSM8K | BBH Score 32.13 | 30 |
| Instruction Tuning | Alpaca GPT4 | Reasoning Score 75.43 | 20 |
| Instruction Tuning | WizardLM | Reasoning Score 75.07 | 20 |
| Instruction Tuning | CoT | Reasoning Score 70.55 | 20 |
| Knowledge | C-Eval | Score 45.99 | 17 |
| Knowledge | CMMLU | Knowledge Score 45.09 | 16 |
