
PTR: Prompt Tuning with Rules for Text Classification

About

Fine-tuned pre-trained language models (PLMs) have achieved strong performance on almost all NLP tasks. By using additional prompts to fine-tune PLMs, we can further stimulate the rich knowledge distributed in PLMs to better serve downstream tasks. Prompt tuning has achieved promising results on some few-class classification tasks such as sentiment classification and natural language inference. However, manually designing many language prompts is cumbersome and error-prone, and for auto-generated prompts it is expensive and time-consuming to verify their effectiveness in non-few-shot scenarios. Hence, it is still challenging for prompt tuning to address many-class classification tasks. To this end, we propose prompt tuning with rules (PTR) for many-class text classification, applying logic rules to construct prompts from several sub-prompts. In this way, PTR is able to encode prior knowledge of each class into prompt tuning. We conduct experiments on relation classification, a typical and complicated many-class classification task, and the results show that PTR significantly and consistently outperforms existing state-of-the-art baselines. This indicates that PTR is a promising approach for exploiting both human prior knowledge and PLMs on complicated classification tasks.
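The idea of composing a full prompt from sub-prompts can be sketched as follows. This is an illustrative example only, not the authors' implementation: the function names, template wording, and label-word mapping are hypothetical, chosen to show how entity-type sub-prompts and a relation sub-prompt each contribute a `[MASK]` slot that the PLM fills in.

```python
def entity_subprompt(entity: str) -> str:
    # Sub-prompt querying the entity's type (e.g. person, city).
    return f"the [MASK] {entity}"

def relation_subprompt() -> str:
    # Sub-prompt querying the relation expressed between the entities.
    return "[MASK]"

def build_prompt(sentence: str, head: str, tail: str) -> str:
    # Compose the sub-prompts into one template appended to the input,
    # following the rule: type(head) AND relation AND type(tail).
    template = (f"{entity_subprompt(head)} {relation_subprompt()} "
                f"{entity_subprompt(tail)}")
    return f"{sentence} {template} ."

prompt = build_prompt("Mark Twain was born in Florida, Missouri.",
                      "Mark Twain", "Florida, Missouri")
# Each relation class then maps to one label word per [MASK] slot,
# e.g. per:city_of_birth -> ("person", "was born in", "city").
```

Classification reduces to scoring how well each class's tuple of label words fills the masked positions, which is how prior knowledge about entity types constrains the many-class decision.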

Xu Han, Weilin Zhao, Ning Ding, Zhiyuan Liu, Maosong Sun • 2021

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|------|---------|--------|--------|------|
| Relation Extraction | TACRED (test) | F1 Score | 72.4 | 194 |
| Dialogue Relation Extraction | DialogRE (test) | F1 | 63.2 | 69 |
| Relation Extraction | SemEval (test) | Micro F1 | 89.9 | 55 |
| Relation Extraction | TACRED-Revisit (test) | Micro F1 | 81.4 | 19 |
| Dialogue Relation Extraction | DialogRE V1 (test) | F1 Score | 63.2 | 13 |
| Relation Extraction | Re-TACRED (test) | F1 Score | 62.1 | 12 |
| Relation Classification | TACRED Original (test) | Micro F1 | 72.4 | 11 |
| Relation Classification | TACRED K=8 low-resource original | Micro F1 | 0.281 | 11 |
| Relation Classification | TACRED K=16 original (low-resource) | Micro F1 | 30.7 | 11 |
| Relation Classification | TACRED K=32 original (low-resource) | Micro F1 | 32.1 | 11 |

Showing 10 of 23 rows.
