DS$^2$-Instruct: Domain-Specific Data Synthesis for Large Language Models Instruction Tuning
About
Adapting Large Language Models (LLMs) to specialized domains requires high-quality instruction tuning datasets, which are expensive to create through human annotation. Existing data synthesis methods focus on general-purpose tasks and fail to capture domain-specific terminology and reasoning patterns. To address this, we introduce DS$^2$-Instruct, a zero-shot framework that generates domain-specific instruction datasets without human supervision. Our approach first generates task-informed keywords to ensure comprehensive domain coverage. It then creates diverse instructions by pairing these keywords with different cognitive levels from Bloom's Taxonomy. Finally, it uses self-consistency validation to ensure data quality. We apply this framework to generate datasets across seven challenging domains, such as mathematics, finance, and logical reasoning. Comprehensive evaluation demonstrates that models fine-tuned on our generated data achieve substantial improvements over existing data generation methods.
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Multiple-choice Question Answering | MedQA | Accuracy | 50.98 | 39 |
| Problem-Solving | GSM8K | Exact Match Accuracy | 78.94 | 20 |
| Question Answering | LogiQA | Accuracy | 44.29 | 17 |
| Multiple-choice Question Answering | CFA | Accuracy (%) | 58.34 | 15 |
| Multiple-choice Question Answering | PubMedQA | Accuracy | 63.62 | 15 |
| Multiple-choice Question Answering | GPQA | Accuracy (%) | 30.35 | 15 |
| Problem-Solving | MATH | Exact Match (%) | 60.12 | 15 |