
GPT3Mix: Leveraging Large-scale Language Models for Text Augmentation

About

Large-scale language models such as GPT-3 are excellent few-shot learners, allowing them to be controlled via natural text prompts. Recent studies report that prompt-based direct classification eliminates the need for fine-tuning but lacks data and inference scalability. This paper proposes a novel data augmentation technique that leverages large-scale language models to generate realistic text samples from a mixture of real samples. We also propose utilizing soft-labels predicted by the language models, effectively distilling knowledge from the large-scale language models and creating textual perturbations simultaneously. We perform data augmentation experiments on diverse classification tasks and show that our method hugely outperforms existing text augmentation methods. Ablation studies and a qualitative analysis provide more insights into our approach.
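To make the idea concrete, below is a minimal sketch of how a GPT3Mix-style augmentation step could look in Python. It is an illustration under assumptions, not the authors' exact prompt or pipeline: the task description, label names, `build_gpt3mix_prompt`, and `parse_augmented_sample` are hypothetical, and the call to the language model itself (e.g. GPT-3) is left outside the sketch.

```python
import random

# Hypothetical task setup; the paper's actual prompt templates may differ.
TASK = "movie review"
LABEL_NAME = "sentiment"
LABELS = ["positive", "negative"]

def build_gpt3mix_prompt(examples, k=2):
    """Mix k real labeled examples into one few-shot prompt that asks the LM to
    continue with a new, similar example and its label."""
    anchors = random.sample(examples, k)
    lines = [
        f"Each item in the following list contains a {TASK} and its "
        f"{LABEL_NAME}, which is one of: {', '.join(LABELS)}."
    ]
    for text, label in anchors:
        lines.append(f"Review: {text} ({LABEL_NAME}: {label})")
    lines.append("Review:")  # unfinished line the model is expected to complete
    return "\n".join(lines)

def parse_augmented_sample(completion_text, label_token_probs):
    """Split a completion into synthetic text and a soft-label distribution.
    `label_token_probs` is assumed to hold the LM's probability of each label
    word at the label position (e.g. recovered from token log-probs)."""
    text = completion_text.split(f"({LABEL_NAME}:")[0].strip()
    total = sum(label_token_probs.get(l, 0.0) for l in LABELS) or 1.0
    soft_label = {l: label_token_probs.get(l, 0.0) / total for l in LABELS}
    return text, soft_label

# Example: mix two real reviews into a prompt, then (outside this sketch) send
# the prompt to a large LM and parse its completion into an augmented pair.
train = [("A touching and beautifully shot film.", "positive"),
         ("Dull plot and wooden acting throughout.", "negative")]
prompt = build_gpt3mix_prompt(train, k=2)
text, soft_label = parse_augmented_sample(
    "An uneven but ultimately rewarding drama. (sentiment: positive)",
    {"positive": 0.8, "negative": 0.2},
)
```

In this style of augmentation, the synthetic (text, soft-label) pairs are added to the real training set and the downstream classifier is trained against the soft targets (e.g. with a KL-divergence loss), which is how knowledge from the large language model is distilled into a smaller model.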

Kang Min Yoo, Dongju Park, Jaewook Kang, Sang-Woo Lee, Woomyeong Park • 2021

Related benchmarks

Task                     Dataset                        Metric    Result   Rank
Sequence Classification  MASSIVE                        Micro F1  75.26    64
Sequence Classification  IMDB                           Micro F1  87.69    64
Sequence Classification  Yahoo                          Micro F1  52.93    64
Sequence Classification  ATIS                           Micro F1  85.36    64
Sequence Classification  Huffpost low-resource (test)   Micro F1  69.46    64
Text Classification      Non-Class -> Class             Accuracy  55.81    10
Text Classification      Class -> Class                 Accuracy  0.5162   10
