
Exploiting Cloze Questions for Few Shot Text Classification and Natural Language Inference

About

Some NLP tasks can be solved in a fully unsupervised fashion by providing a pretrained language model with "task descriptions" in natural language (e.g., Radford et al., 2019). While this approach underperforms its supervised counterpart, we show in this work that the two ideas can be combined: We introduce Pattern-Exploiting Training (PET), a semi-supervised training procedure that reformulates input examples as cloze-style phrases to help language models understand a given task. These phrases are then used to assign soft labels to a large set of unlabeled examples. Finally, standard supervised training is performed on the resulting training set. For several tasks and languages, PET outperforms supervised training and strong semi-supervised approaches in low-resource settings by a large margin.
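
Below is a minimal sketch of the cloze-reformulation step described above, assuming a masked language model from the Hugging Face `transformers` library. The sentiment pattern, the verbalizer words ("great"/"terrible"), and the `bert-base-uncased` checkpoint are illustrative stand-ins, not necessarily the exact choices from the paper.

```python
# Sketch of PET's cloze scoring step (illustrative pattern/verbalizer):
# reformulate an input as a cloze phrase and read the masked-LM
# distribution over verbalizer tokens to obtain a soft label.
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

# Pattern: turn a review into a cloze-style phrase with one [MASK].
def pattern(text: str) -> str:
    return f"{text} All in all, it was {tokenizer.mask_token}."

# Verbalizer: map each label to a single vocabulary token.
verbalizer = {"positive": "great", "negative": "terrible"}

def soft_label(text: str) -> dict:
    inputs = tokenizer(pattern(text), return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = model(**inputs).logits[0]
    # Position of the single masked token.
    mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero().item()
    # Softmax restricted to the verbalizer tokens yields the soft label.
    token_ids = tokenizer.convert_tokens_to_ids(list(verbalizer.values()))
    probs = torch.softmax(logits[mask_pos, token_ids], dim=-1)
    return dict(zip(verbalizer, probs.tolist()))

print(soft_label("A gripping film with outstanding performances."))
# e.g. {'positive': 0.9..., 'negative': 0.0...}
```

In PET proper, models fine-tuned on such pattern-verbalizer pairs assign these soft labels to a large unlabeled set, and a standard supervised classifier is then trained on the resulting data.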

Timo Schick, Hinrich Schütze • 2020

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Subjectivity Classification | Subj | Accuracy 88.7 | 329 |
| Sentiment Analysis | IMDB (test) | Accuracy 81.8 | 248 |
| Sentiment Analysis | SST-2 | Accuracy 82.63 | 165 |
| Sentiment Analysis | CR | Accuracy 88.8 | 141 |
| Sentiment Analysis | SST-2 (test) | Accuracy 81.2 | 136 |
| Text Classification | AG News | -- | 119 |
| Topic Classification | AG News (test) | Accuracy 73.2 | 98 |
| Paraphrase Detection | MRPC | Avg Accuracy 63.4 | 89 |
| Word Sense Disambiguation | WiC | Avg Accuracy 51.6 | 87 |
| Sentiment Analysis | IMDB | Accuracy 77.31 | 67 |

Showing 10 of 43 rows.
