
Synthetic Data Generation for Training Diversified Commonsense Reasoning Models

About

Conversational agents must respond to their users not only with high-quality (i.e. commonsense-bearing) responses, but also by considering multiple plausible alternative scenarios, reflecting diversity in their responses. Despite the growing need to train diverse commonsense generators, progress in this line of work has been significantly hindered by the lack of large-scale, high-quality, diverse commonsense training datasets. Due to high annotation costs, existing Generative Commonsense Reasoning (GCR) datasets are created with a small number of human annotators and cover only a narrow set of commonsense scenarios. To address this training resource gap, we propose a two-stage method to create CommonSyn, the first synthetic dataset for diversified GCR. Models fine-tuned on our synthetic data jointly improve both generation diversity and quality compared with vanilla models and models fine-tuned on a human-crafted dataset, across Large Language Models (LLMs) of different sizes.
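The abstract reports joint gains in generation diversity and quality. The paper's evaluation code is not shown here, but diversity in generation work is commonly measured with surface-level metrics such as distinct-n (the ratio of unique n-grams to total n-grams across a set of generated responses). A minimal sketch of that standard metric, under the assumption of whitespace tokenization:

```python
from itertools import chain

def distinct_n(responses, n=2):
    """Distinct-n diversity: unique n-grams / total n-grams across
    a set of generated responses (higher = more diverse)."""
    ngrams = list(chain.from_iterable(
        zip(*(tokens[i:] for i in range(n)))      # sliding n-grams per response
        for tokens in (r.split() for r in responses)
    ))
    if not ngrams:
        return 0.0
    return len(set(ngrams)) / len(ngrams)

# Illustrative generations for one concept set (hypothetical data)
generations = [
    "a dog runs across the field",
    "a dog chases a ball in the park",
    "the cat sleeps on the sofa",
]
print(round(distinct_n(generations, n=2), 3))  # → 0.941
```

A higher distinct-2 indicates the model repeats fewer bigrams across its sampled outputs; quality is typically judged separately (e.g. the Win-Tie scores in the table below).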

Tianhui Zhang, Bei Peng, Danushka Bollegala• 2026

Related benchmarks

| Task | Dataset | Metric | Score | Rank |
|---|---|---|---|---|
| Question Answering | PIQA | Accuracy | 71.3 | 374 |
| Sentence Generation | CommonGen | Win-Tie Score | 53.6 | 18 |
| Question Answering | CSQA | Accuracy | 70.8 | 10 |
| Abductive Natural Language Generation | α-NLG | Win-Tie Score | 61.1 | 3 |
| Generative Commonsense Reasoning | ComVE | Win-Tie Score | 78.9 | 3 |
| Question Answering | CSQA 2 | Accuracy | 49 | 3 |
