
Exploiting Cloze Questions for Few Shot Text Classification and Natural Language Inference

About

Some NLP tasks can be solved in a fully unsupervised fashion by providing a pretrained language model with "task descriptions" in natural language (e.g., Radford et al., 2019). While this approach underperforms its supervised counterpart, we show in this work that the two ideas can be combined: We introduce Pattern-Exploiting Training (PET), a semi-supervised training procedure that reformulates input examples as cloze-style phrases to help language models understand a given task. These phrases are then used to assign soft labels to a large set of unlabeled examples. Finally, standard supervised training is performed on the resulting training set. For several tasks and languages, PET outperforms supervised training and strong semi-supervised approaches in low-resource settings by a large margin.
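The abstract's three steps (reformulate inputs as cloze phrases, soft-label unlabeled data, then train a standard classifier) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the pattern string, the verbalizer mapping, and the toy keyword-based scoring function stand in for a real pretrained masked language model.

```python
def pattern(text):
    # Step 1: reformulate an input example as a cloze-style phrase.
    return f"{text} It was [MASK]."

# Verbalizer: maps each label to a word the language model can predict.
VERBALIZER = {"positive": "great", "negative": "terrible"}

def mask_word_probs(cloze):
    # Stand-in for a pretrained MLM. It returns a probability for each
    # verbalized label word filling [MASK]; here, a trivial keyword heuristic
    # replaces the model (an assumption for this sketch).
    text = cloze.lower()
    score = {"great": 1.0, "terrible": 1.0}
    if "love" in text or "good" in text:
        score["great"] += 2.0
    if "hate" in text or "bad" in text:
        score["terrible"] += 2.0
    total = sum(score.values())
    return {w: s / total for w, s in score.items()}

def soft_label(text):
    # Step 2: turn MLM word probabilities into soft label probabilities.
    probs = mask_word_probs(pattern(text))
    return {label: probs[word] for label, word in VERBALIZER.items()}

# Assign soft labels to a (toy) set of unlabeled examples.
unlabeled = ["I love this movie.", "This was a bad film."]
soft_labeled = [(x, soft_label(x)) for x in unlabeled]

# Step 3 (not shown): train a standard supervised classifier on
# `soft_labeled`, e.g. with a soft cross-entropy loss.
for x, y in soft_labeled:
    print(x, "->", max(y, key=y.get))
```

In the paper, multiple patterns are trained and their predictions ensembled before the final supervised stage; the sketch collapses that to a single pattern for brevity.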

Timo Schick, Hinrich Schütze • 2020

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Subjectivity Classification | Subj | Accuracy | 88.7 | 266 |
| Sentiment Analysis | IMDB (test) | Accuracy | 81.8 | 248 |
| Sentiment Analysis | SST-2 | Accuracy | 82.63 | 156 |
| Sentiment Analysis | SST-2 (test) | Accuracy | 81.2 | 136 |
| Sentiment Analysis | CR | Accuracy | 88.8 | 123 |
| Text Classification | AGNews | -- | -- | 119 |
| Topic Classification | AG News (test) | Accuracy | 73.2 | 98 |
| Paraphrase Detection | MRPC | Avg Accuracy | 63.4 | 89 |
| Word Sense Disambiguation | WiC | Avg Accuracy | 51.6 | 84 |
| Topic Classification | DBPedia (test) | Accuracy | 71.3 | 64 |
(Showing 10 of 43 rows.)
