
DS$^2$-Instruct: Domain-Specific Data Synthesis for Large Language Models Instruction Tuning

About

Adapting Large Language Models (LLMs) to specialized domains requires high-quality instruction tuning datasets, which are expensive to create through human annotation. Existing data synthesis methods focus on general-purpose tasks and fail to capture domain-specific terminology and reasoning patterns. To address this, we introduce DS$^2$-Instruct, a zero-shot framework that generates domain-specific instruction datasets without human supervision. Our approach first generates task-informed keywords to ensure comprehensive domain coverage. It then creates diverse instructions by pairing these keywords with different cognitive levels from Bloom's Taxonomy. Finally, it uses self-consistency validation to ensure data quality. We apply this framework to generate datasets across seven challenging domains, such as mathematics, finance, and logical reasoning. Comprehensive evaluation demonstrates that models fine-tuned on our generated data achieve substantial improvements over existing data generation methods.
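The abstract describes a three-stage pipeline: generate domain keywords, cross them with Bloom's Taxonomy cognitive levels to form diverse instructions, then filter with self-consistency validation. A minimal sketch of the pairing and filtering stages is below; the prompt template and the agreement threshold are illustrative assumptions, not the paper's actual implementation, and the keyword-generation and answer-sampling LLM calls are stubbed out.

```python
from itertools import product

# Bloom's Taxonomy cognitive levels (Anderson & Krathwohl revision).
BLOOM_LEVELS = ["Remember", "Understand", "Apply", "Analyze", "Evaluate", "Create"]

def make_prompts(keywords, levels=BLOOM_LEVELS):
    """Pair each domain keyword with each cognitive level to form
    instruction-generation prompts (hypothetical template)."""
    return [
        f"Write a {level.lower()}-level instruction about '{kw}'."
        for kw, level in product(keywords, levels)
    ]

def self_consistency_filter(candidates, threshold=0.6):
    """Keep instructions whose independently sampled answers agree often
    enough. `candidates` maps an instruction to a list of sampled answers
    (in the full pipeline these would come from repeated LLM calls)."""
    kept = []
    for instruction, answers in candidates.items():
        majority = max(set(answers), key=answers.count)
        if answers.count(majority) / len(answers) >= threshold:
            kept.append((instruction, majority))
    return kept
```

For example, one keyword yields six instructions (one per cognitive level), and an instruction whose five sampled answers agree four times passes a 0.6 agreement threshold.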

Ruiyao Xu, Noelle I. Samia, Han Liu • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Multiple-choice Question Answering | MedQA | Accuracy | 50.98 | 39 |
| Problem-Solving | GSM8K | Exact Match Accuracy | 78.94 | 20 |
| Question Answering | LogiQA | Accuracy | 44.29 | 17 |
| Multiple-choice Question Answering | CFA | Accuracy (%) | 58.34 | 15 |
| Multiple-choice Question Answering | PubMedQA | Accuracy | 63.62 | 15 |
| Multiple-choice Question Answering | GPQA | Accuracy (%) | 30.35 | 15 |
| Problem-Solving | MATH | Exact Match (%) | 60.12 | 15 |
