
Uncertainty-aware Self-training for Text Classification with Few Labels

About

The recent success of large-scale pre-trained language models hinges crucially on fine-tuning them on large amounts of labeled data for the downstream task, which is typically expensive to acquire. In this work, we study self-training, one of the earliest semi-supervised learning approaches, to reduce the annotation bottleneck by making use of large-scale unlabeled data for the target task. The standard self-training mechanism randomly samples instances from the unlabeled pool to pseudo-label and augment the labeled data. In this work, we propose an approach to improve self-training by incorporating uncertainty estimates of the underlying neural network, leveraging recent advances in Bayesian deep learning. Specifically, we propose (i) acquisition functions to select instances from the unlabeled pool leveraging Monte Carlo (MC) Dropout, and (ii) a learning mechanism leveraging model confidence for self-training. As an application, we focus on text classification over five benchmark datasets. We show that our methods, leveraging only 20-30 labeled samples per class for each task for training and for validation, perform within 3% of fully supervised pre-trained language models fine-tuned on thousands of labeled instances, with an aggregate accuracy of 91%, improving by up to 12% over baselines.
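The abstract mentions MC Dropout-based acquisition from the unlabeled pool but gives no implementation details. The sketch below is a hypothetical PyTorch illustration of the general idea: keep dropout active at inference, run several stochastic forward passes, and turn the disagreement between passes into an acquisition score (BALD is used here as one common choice; the paper's exact acquisition functions and its confidence-weighted training loss are not reproduced). All names (`enable_mc_dropout`, `bald_scores`, `classifier`, `unlabeled_batch`) are placeholders, not identifiers from the paper's code.

```python
import torch
import torch.nn.functional as F

def enable_mc_dropout(model):
    # Put the model in eval mode but keep dropout layers stochastic,
    # so repeated forward passes yield different predictions (MC Dropout).
    model.eval()
    for module in model.modules():
        if isinstance(module, torch.nn.Dropout):
            module.train()

@torch.no_grad()
def mc_dropout_predict(model, inputs, num_passes=10):
    # Collect softmax outputs from several stochastic forward passes.
    # Assumes `model(inputs)` returns class logits of shape (batch, num_classes).
    enable_mc_dropout(model)
    probs = [F.softmax(model(inputs), dim=-1) for _ in range(num_passes)]
    return torch.stack(probs, dim=0)  # (num_passes, batch, num_classes)

def bald_scores(probs):
    # BALD acquisition: entropy of the mean prediction minus the mean entropy
    # of the individual predictions. High values mark instances on which the
    # stochastic passes disagree, i.e. where the model is most uncertain.
    mean_probs = probs.mean(dim=0)
    entropy_of_mean = -(mean_probs * mean_probs.clamp_min(1e-12).log()).sum(dim=-1)
    mean_entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1).mean(dim=0)
    return entropy_of_mean - mean_entropy  # (batch,)

# Example usage: score a batch of unlabeled examples, then pick candidates
# to pseudo-label in the next self-training round.
# scores = bald_scores(mc_dropout_predict(classifier, unlabeled_batch))
# selected = scores.topk(k=32).indices
```

Whether to prefer high- or low-uncertainty instances, and how to weight the resulting pseudo-labels during training, are design choices the paper studies; the snippet only shows how the raw uncertainty signal can be computed.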

Subhabrata Mukherjee, Ahmed Hassan Awadallah • 2020

Related benchmarks

Task | Dataset | Result | Rank
Question Classification | TREC | Accuracy: 65.52 | 205
Text Classification | AGNews | Accuracy: 86.28 | 119
Sentiment Classification | IMDB | Accuracy: 84.56 | 41
Word Sense Disambiguation | WiC (dev) | Accuracy: 63.48 | 32
Sentiment Classification | Yelp | Accuracy: 90.53 | 24
Slot Filling | MIT-R | Accuracy: 74.41 | 13
Relation Classification | ChemProt | Accuracy: 52.14 | 13
