
Mitigating Word Bias in Zero-shot Prompt-based Classifiers

About

Prompt-based classifiers are an attractive approach to zero-shot classification. However, the precise choice of prompt template and label words can greatly influence performance, with semantically equivalent settings often showing notable performance differences. This discrepancy can be partly attributed to word biases, where the classifier may be biased towards particular classes. One way to address this problem is to optimise classification thresholds on a labelled data set; however, doing so sacrifices some of the advantages of prompt-based classifiers. This paper instead approaches the problem by examining the expected marginal probabilities of the classes: probabilities are reweighted, in an unsupervised fashion, to have a uniform prior over classes. Further, we draw a theoretical connection between the class priors and the language model's word priors, which offers the ability to set a threshold in a zero-resource fashion. We show that matching class priors correlates strongly with the oracle upper-bound performance, and demonstrate large, consistent performance gains over a range of prompt settings and NLP tasks.
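The unsupervised reweighting described in the abstract can be sketched as follows. This is an illustrative fixed-point iteration, not the paper's exact algorithm: per-class weights are rescaled until the expected marginal probability over the unlabelled inputs is uniform. The function name and iteration scheme are assumptions for illustration.

```python
import numpy as np

def match_class_priors(probs, n_iter=100, tol=1e-8):
    """Reweight class probabilities so that the expected marginal
    over the (unlabelled) inputs is uniform.

    probs: (N, K) array, each row a classifier's probability
           distribution over K classes for one input.
    Returns the reweighted (N, K) probabilities.
    """
    n, k = probs.shape
    w = np.ones(k)                               # per-class weights
    for _ in range(n_iter):
        rew = probs * w
        rew /= rew.sum(axis=1, keepdims=True)    # renormalise per input
        marginal = rew.mean(axis=0)              # current class prior
        if np.max(np.abs(marginal - 1.0 / k)) < tol:
            break
        w *= (1.0 / k) / marginal                # push prior toward uniform
    return rew
```

No labels are used anywhere: only the model's own output distributions over a pool of inputs are needed, which is what makes the correction zero-shot compatible.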

Adian Liusie, Potsawee Manakul, Mark J. F. Gales• 2023

Related benchmarks

Task                  Dataset                  Metric           Result  Rank
Image Classification  STL-10 (test)            Accuracy         98.4    357
Image Classification  Stanford Cars (test)     Accuracy         39.8    316
Image Classification  CIFAR10 (test)           Test Accuracy    91.3    284
Image Classification  DTD (test)               Accuracy         42.1    257
Image Classification  Caltech101 (test)        Accuracy         59.3    159
Image Classification  ImageNet-Sketch (test)   --               --      153
Image Classification  EuroSAT (test)           Accuracy         41.6    141
Image Classification  SUN397 (test)            Top-1 Accuracy   6.7     136
Image Classification  Flowers102 (test)        Accuracy         54      119
Image Classification  ImageNet-R (test)        Accuracy         16.7    118

(Showing 10 of 20 rows)
