Papers
arxiv:2603.12932

DS²-Instruct: Domain-Specific Data Synthesis for Large Language Models Instruction Tuning

Published on Mar 16
Abstract

A zero-shot framework called DS²-Instruct is presented for generating domain-specific instruction datasets without human supervision, utilizing task-informed keywords and Bloom's Taxonomy for cognitive level variation.

AI-generated summary

Adapting Large Language Models (LLMs) to specialized domains requires high-quality instruction tuning datasets, which are expensive to create through human annotation. Existing data synthesis methods focus on general-purpose tasks and fail to capture domain-specific terminology and reasoning patterns. To address this, we introduce DS²-Instruct, a zero-shot framework that generates domain-specific instruction datasets without human supervision. Our approach first generates task-informed keywords to ensure comprehensive domain coverage. It then creates diverse instructions by pairing these keywords with different cognitive levels from Bloom's Taxonomy. Finally, it uses self-consistency validation to ensure data quality. We apply this framework to generate datasets across seven challenging domains, such as mathematics, finance, and logical reasoning. Comprehensive evaluation demonstrates that models fine-tuned on our generated data achieve substantial improvements over existing data generation methods.
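The three-stage pipeline described above (keyword generation, keyword-by-cognitive-level instruction pairing, self-consistency filtering) can be sketched in code. The sketch below is illustrative only and is not from the paper: every function name, the template strings, and the canned keyword lists are hypothetical stand-ins for the LLM calls the real framework would make.

```python
# Illustrative sketch of a DS²-Instruct-style synthesis loop.
# All helpers here are hypothetical; in the actual framework each
# step would be driven by LLM prompts, not hard-coded stubs.

# Bloom's Taxonomy cognitive levels, as referenced in the abstract.
BLOOM_LEVELS = ["remember", "understand", "apply", "analyze", "evaluate", "create"]


def generate_keywords(domain, n=3):
    # Stage 1 (stub): task-informed keyword generation for domain coverage.
    # A real implementation would prompt an LLM for domain terminology.
    seeds = {
        "finance": ["compound interest", "liquidity risk", "bond duration"],
        "mathematics": ["eigenvalues", "modular arithmetic", "convexity"],
    }
    return seeds.get(domain, [f"{domain} concept {i}" for i in range(n)])[:n]


def make_instruction(keyword, level):
    # Stage 2 (stub): pair a keyword with a cognitive level to vary
    # the reasoning demanded by the instruction.
    templates = {
        "remember": f"Define the term '{keyword}'.",
        "apply": f"Solve a worked problem that uses {keyword}.",
        "create": f"Design a new scenario that illustrates {keyword}.",
    }
    return templates.get(level, f"Explain {keyword} at the '{level}' level.")


def self_consistent(answers):
    # Stage 3 (stub): self-consistency validation -- keep an instruction
    # only if a strict majority of sampled answers agree.
    most_common = max(set(answers), key=answers.count)
    return answers.count(most_common) > len(answers) / 2


def synthesize(domain):
    data = []
    for kw in generate_keywords(domain):
        for level in BLOOM_LEVELS:
            inst = make_instruction(kw, level)
            # A real pipeline would sample several LLM answers here;
            # we use fixed placeholders to keep the sketch runnable.
            answers = ["answer A", "answer A", "answer B"]
            if self_consistent(answers):
                data.append({"instruction": inst, "keyword": kw, "level": level})
    return data


dataset = synthesize("finance")
print(len(dataset))  # 3 keywords x 6 Bloom levels = 18 instructions
```

The cross product of keywords and Bloom levels is what gives the generated instructions their cognitive-level variation; the majority-vote filter then discards instructions whose answers are unstable.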

