
Variational Pretraining for Semi-supervised Text Classification

About

We introduce VAMPIRE, a lightweight pretraining framework for effective text classification when data and computing resources are limited. We pretrain a unigram document model as a variational autoencoder on in-domain, unlabeled data and use its internal states as features in a downstream classifier. Empirically, we show the relative strength of VAMPIRE against computationally expensive contextual embeddings and other popular semi-supervised baselines under low-resource settings. We also find that fine-tuning to in-domain data is crucial to achieving decent performance from contextual embeddings when working with limited supervision. We accompany this paper with code to pretrain and use VAMPIRE embeddings in downstream tasks.
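The core idea above — encode a unigram (bag-of-words) representation of a document with a VAE encoder, then reuse the encoder's internal states as downstream features — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the dimensions, the ReLU encoder, the randomly initialized weights standing in for a trained model, and the choice to concatenate the hidden layer with the latent mean are all assumptions made here for clarity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, chosen for illustration (not the paper's settings).
vocab_size, hidden_dim, latent_dim = 1000, 64, 16

# Randomly initialized encoder weights, standing in for a pretrained VAE encoder.
W_h = rng.normal(scale=0.01, size=(vocab_size, hidden_dim))
W_mu = rng.normal(scale=0.01, size=(hidden_dim, latent_dim))
W_logvar = rng.normal(scale=0.01, size=(hidden_dim, latent_dim))

def encode(bow):
    """Map a unigram count vector to hidden state and latent mean/log-variance."""
    h = np.maximum(bow @ W_h, 0.0)   # hidden layer with ReLU
    mu = h @ W_mu                    # mean of the latent Gaussian
    logvar = h @ W_logvar            # log-variance of the latent Gaussian
    return h, mu, logvar

def features(bow):
    """VAMPIRE-style downstream features: the encoder's internal states.

    Here we concatenate the hidden layer and the latent mean; this particular
    combination is an illustrative choice, not prescribed by the abstract.
    """
    h, mu, _ = encode(bow)
    return np.concatenate([h, mu])

# A fake document: unigram counts over the vocabulary.
doc = rng.integers(0, 3, size=vocab_size).astype(float)
feats = features(doc)
print(feats.shape)  # (80,) = hidden_dim + latent_dim
```

In the semi-supervised setting described above, vectors like `feats` would be fed to a small supervised classifier trained on the limited labeled data, while the encoder itself is pretrained on the larger pool of unlabeled in-domain text.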

Suchin Gururangan, Tam Dang, Dallas Card, Noah A. Smith • 2019

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Multi-class text classification | Yelp | Micro-F1 | 55.1 | 33 |
| Multi-class text classification | AG-News | Micro-F1 | 0.876 | 33 |
| Text classification | Yahoo | Micro-F1 | 64.4 | 33 |
