
ProGen: Progressive Zero-shot Dataset Generation via In-context Feedback

About

Recently, dataset-generation-based zero-shot learning has shown promising results by training a task-specific model on a dataset synthesized by a large pre-trained language model (PLM). The final task-specific model often achieves comparable or even better performance than the PLM under the zero-shot setting, with orders of magnitude fewer parameters. However, synthetic datasets have drawbacks: they have long suffered from low-quality issues (e.g., low informativeness and redundancy). This explains why massive amounts of synthetic data do not lead to better performance -- an improvement we would expect from comparable amounts of human-labeled data. To improve the quality of dataset synthesis, we propose a progressive zero-shot dataset generation framework, ProGen, which leverages feedback from the task-specific model to guide the generation of new training data via in-context examples. Extensive experiments on five text classification datasets demonstrate the effectiveness of the proposed approach. We also show that ProGen achieves on-par or superior performance with only 1% of the synthetic dataset size used by baseline methods without in-context feedback.
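The progressive loop described in the abstract (generate with a PLM, train the task-specific model, feed the most useful examples back as in-context demonstrations) can be sketched roughly as below. This is a toy illustration, not the paper's implementation: `plm_generate` and `train_and_score` are hypothetical stand-ins for the real PLM generator and the task model's example-quality estimate.

```python
import random

random.seed(0)

# Hypothetical stubs: in the paper these would be a large PLM and a small
# task-specific classifier; here they are toy stand-ins for sentiment data.
POSITIVE = ["great movie", "loved it", "wonderful acting"]
NEGATIVE = ["terrible plot", "boring film", "awful pacing"]

def plm_generate(label, in_context):
    """Mock PLM call: sample a text for `label`; a real PLM would condition
    on the in-context feedback examples in its prompt."""
    pool = POSITIVE if label == 1 else NEGATIVE
    return random.choice(pool)

def train_and_score(dataset):
    """Mock training step: return a per-example quality score (random here;
    the paper estimates how useful each example is to the task model)."""
    return [(text, label, random.random()) for text, label in dataset]

def progen(rounds=3, per_round=4, k_feedback=2):
    dataset, in_context = [], []
    for _ in range(rounds):
        # 1) Generate a new batch, conditioned on current in-context feedback.
        batch = [(plm_generate(label, in_context), label)
                 for label in (0, 1) for _ in range(per_round // 2)]
        dataset.extend(batch)
        # 2) Train the task model and score examples by estimated quality.
        scored = train_and_score(dataset)
        # 3) Keep the top-scoring examples as in-context feedback next round.
        scored.sort(key=lambda item: item[2], reverse=True)
        in_context = [(text, label) for text, label, _ in scored[:k_feedback]]
    return dataset, in_context

synthetic_data, feedback = progen()
```

Each round thus grows the synthetic dataset while steering the PLM toward examples the task model found informative, which is what lets a small synthetic set match much larger ones generated without feedback.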

Jiacheng Ye, Jiahui Gao, Jiangtao Feng, Zhiyong Wu, Tao Yu, Lingpeng Kong • 2022

Related benchmarks

Task                      Dataset          Result              Rank
Mathematical Reasoning    SVAMP            Accuracy: 23.7      368
Sentiment Classification  SST2 (test)      Accuracy: 84.12     214
Sentiment Analysis        SST-2            Accuracy: 87.2      156
Sentiment Classification  IMDB (test)      --                  144
Topic Classification      AG News (test)   Accuracy: 80.81     98
Sentiment Analysis        IMDB             Accuracy: 84.12     57
Question Answering        SQuAD            Exact Match: 68.1   50
Sentiment Classification  Yelp (test)      Accuracy: 89.39     46
Sentiment Analysis        Yelp             Accuracy: 89.39     30
Sentiment Analysis        Rotten Tomato    Accuracy: 82.86     25
Showing 10 of 23 rows

Other info

Code
