
Data Augmentation using Pre-trained Transformer Models

About

Language-model-based pre-trained models such as BERT have provided significant gains across different NLP tasks. In this paper, we study different types of transformer-based pre-trained models, such as auto-regressive models (GPT-2), auto-encoder models (BERT), and seq2seq models (BART), for conditional data augmentation. We show that prepending the class label to text sequences provides a simple yet effective way to condition the pre-trained models for data augmentation. Additionally, on three classification benchmarks, the pre-trained seq2seq model outperforms other data augmentation methods in a low-resource setting. Further, we explore how data augmentation based on different pre-trained models differs in terms of data diversity, and how well such methods preserve the class-label information.

Varun Kumar, Ashutosh Choudhary, Eunah Cho • 2020
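The label-prepending idea is easy to sketch with an off-the-shelf auto-regressive model. Below is a minimal sketch assuming the Hugging Face `transformers` library; the separator string, helper names, and decoding hyperparameters are illustrative choices, not the paper's exact configuration. It serializes training examples as `<label> <sep> <text>` (the format a model would be fine-tuned on) and then prompts with a label prefix to sample new in-class examples.

```python
# A minimal sketch of label-prepended conditional generation for data
# augmentation, assuming the Hugging Face `transformers` library.
# SEP and the generation hyperparameters are illustrative assumptions.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

SEP = " | "  # hypothetical label/text separator


def format_example(label: str, text: str) -> str:
    # Fine-tuning examples are serialized as "<label> | <text>" so the
    # model learns to continue a label prefix with in-class text.
    return f"{label}{SEP}{text}{tokenizer.eos_token}"


def generate_synthetic(label: str, num_samples: int = 3) -> list[str]:
    # Condition generation on the class label by using it as the prompt.
    prompt = f"{label}{SEP}"
    input_ids = tokenizer(prompt, return_tensors="pt").input_ids
    with torch.no_grad():
        outputs = model.generate(
            input_ids,
            do_sample=True,
            top_k=50,
            top_p=0.95,
            max_new_tokens=40,
            num_return_sequences=num_samples,
            pad_token_id=tokenizer.eos_token_id,
        )
    decoded = [tokenizer.decode(o, skip_special_tokens=True) for o in outputs]
    # Strip the label prefix to recover just the synthetic sentence.
    return [d[len(prompt):].strip() for d in decoded]


if __name__ == "__main__":
    print(format_example("positive", "the battery life is excellent"))
    print(generate_synthetic("positive"))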

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Few-shot Text Classification | 26 few-shot tasks, Class -> Non-Class transfer setting (test) | Accuracy | 43.69 | 84 |
| Few-shot Text Classification | 26 few-shot tasks, Random -> Random transfer setting (test) | Accuracy | 44.44 | 84 |
| Few-shot Text Classification | 26 few-shot tasks, Non-Class -> Class transfer setting (test) | Accuracy | 46.87 | 84 |
| Few-shot Text Classification | 26 few-shot tasks, Class -> Class transfer setting (test) | Accuracy | 44.88 | 84 |
| Sentiment Classification | Laptop14 | Accuracy | 79.08 | 28 |
