PTR: Prompt Tuning with Rules for Text Classification
About
Fine-tuned pre-trained language models (PLMs) have achieved remarkable performance on almost all NLP tasks. By using additional prompts to fine-tune PLMs, we can further stimulate the rich knowledge distributed in PLMs to better serve downstream tasks. Prompt tuning has achieved promising results on some few-class classification tasks such as sentiment classification and natural language inference. However, manually designing many language prompts is cumbersome and error-prone, and for auto-generated prompts it is expensive and time-consuming to verify their effectiveness in non-few-shot scenarios. Hence, it remains challenging for prompt tuning to address many-class classification tasks. To this end, we propose prompt tuning with rules (PTR) for many-class text classification, applying logic rules to construct prompts from several sub-prompts. In this way, PTR is able to encode prior knowledge about each class into prompt tuning. We conduct experiments on relation classification, a typical and complicated many-class classification task, and the results show that PTR significantly and consistently outperforms existing state-of-the-art baselines. This indicates that PTR is a promising approach for exploiting both human prior knowledge and PLMs on complicated classification tasks.
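The idea of composing a full prompt from rule-based sub-prompts can be sketched as follows. This is a minimal illustration, not the released PTR code: the function names, the template wording, and the `[MASK]` placeholders are all assumptions chosen to mirror the paper's description of conjoining entity-type sub-prompts with a relational sub-prompt.

```python
# Hypothetical sketch of PTR-style prompt composition for relation
# classification. Each relation is decomposed into sub-prompts: two that
# query the types of the subject/object entities, and one that queries
# the phrase connecting them. The full template is their conjunction.

def entity_subprompt(entity: str) -> str:
    # Sub-prompt asking the PLM to fill in the entity's type.
    return f"the [MASK] {entity}"

def relation_subprompt(subj: str, obj: str) -> str:
    # Sub-prompt asking the PLM to fill in the connecting phrase.
    return f"{subj} [MASK] {obj}"

def compose_prompt(sentence: str, subj: str, obj: str) -> str:
    # Conjunction of the sub-prompts, appended to the input sentence.
    parts = [
        entity_subprompt(subj),
        relation_subprompt(subj, obj),
        entity_subprompt(obj),
    ]
    return sentence + " " + " , ".join(parts) + " ."

prompt = compose_prompt("Mark Twain was born in Florida.", "Mark Twain", "Florida")
# The PLM then fills the three [MASK] slots (e.g. "person", "was born in",
# "location"), and the filled values jointly determine the predicted relation.
```

Because each class is described by the conjunction of a few reusable sub-prompts, prior knowledge about entity types and relations is encoded without hand-writing one monolithic prompt per class.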
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Relation Extraction | TACRED (test) | F1 Score | 72.4 | 194 |
| Dialogue Relation Extraction | DialogRE (test) | F1 | 63.2 | 69 |
| Relation Extraction | SemEval (test) | Micro F1 | 89.9 | 55 |
| Relation Extraction | TACRED-Revisit (test) | Micro F1 | 81.4 | 19 |
| Dialogue Relation Extraction | DialogRE V1 (test) | F1 Score | 63.2 | 13 |
| Relation Extraction | Re-TACRED (test) | F1 Score | 62.1 | 12 |
| Relation Classification | TACRED Original (test) | Micro F1 | 72.4 | 11 |
| Relation Classification | TACRED K=8 original (low-resource) | Micro F1 | 0.281 | 11 |
| Relation Classification | TACRED K=16 original (low-resource) | Micro F1 | 30.7 | 11 |
| Relation Classification | TACRED K=32 original (low-resource) | Micro F1 | 32.1 | 11 |