
The Power of Prompt Tuning for Low-Resource Semantic Parsing

About

Prompt tuning has recently emerged as an effective method for adapting pre-trained language models to a number of language understanding and generation tasks. In this paper, we investigate prompt tuning for semantic parsing -- the task of mapping natural language utterances onto formal meaning representations. On the low-resource splits of Overnight and TOPv2, we find that a prompt-tuned T5-xl significantly outperforms its fine-tuned counterpart, as well as strong GPT-3 and BART baselines. We also conduct ablation studies across different model scales and target representations, finding that, with increasing model scale, prompt-tuned T5 models improve at generating target representations that are far from the pre-training distribution.

Nathan Schucher, Siva Reddy, Harm de Vries • 2021
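
For readers unfamiliar with the technique, the sketch below illustrates the core idea of prompt tuning as studied in the paper: the pre-trained T5 weights are frozen and only a small matrix of soft-prompt embeddings, prepended to the embedded input, receives gradients. This is a minimal illustrative sketch, not the authors' implementation; the model size, prompt length, initialization, optimizer, and the step function are all assumptions made here for brevity.

```python
# Minimal sketch of prompt tuning with a frozen T5 (assumed setup,
# not the paper's exact configuration).
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

model_name = "t5-small"   # assumption; the paper scales up to T5-xl
prompt_length = 100       # number of trainable soft-prompt tokens (assumed)

tokenizer = T5Tokenizer.from_pretrained(model_name)
model = T5ForConditionalGeneration.from_pretrained(model_name)

# Freeze every pre-trained weight; only the soft prompt is updated.
for param in model.parameters():
    param.requires_grad = False

# Randomly initialized soft prompt (initializing from vocabulary
# embeddings is another common choice).
embed_dim = model.config.d_model
soft_prompt = torch.nn.Parameter(torch.randn(prompt_length, embed_dim) * 0.5)

# Illustrative optimizer/learning rate, not tuned values.
optimizer = torch.optim.Adam([soft_prompt], lr=1e-3)

def step(utterance: str, target: str) -> torch.Tensor:
    """One training step: prepend the soft prompt to the embedded input."""
    enc = tokenizer(utterance, return_tensors="pt")
    labels = tokenizer(target, return_tensors="pt").input_ids

    # Embed the input tokens, then concatenate the soft prompt in front.
    input_embeds = model.get_input_embeddings()(enc.input_ids)  # (1, seq, dim)
    input_embeds = torch.cat([soft_prompt.unsqueeze(0), input_embeds], dim=1)
    attention_mask = torch.cat(
        [torch.ones(1, prompt_length, dtype=enc.attention_mask.dtype),
         enc.attention_mask],
        dim=1)

    loss = model(inputs_embeds=input_embeds,
                 attention_mask=attention_mask,
                 labels=labels).loss
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss
```

Because only prompt_length × d_model parameters are trained, a separate prompt can be stored per task while a single frozen T5 checkpoint is shared across all of them.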

Related benchmarks

Task | Dataset | Metric | Result | Rank
Image Classification | FGVC-Aircraft (test) | Accuracy | 40.44 | 231
Image Classification | ImageNet V2 (target) | Accuracy | 64.2 | 42
Image Classification | ImageNet-Sketch (target) | Accuracy | 47.99 | 30
Image Classification | ImageNet-R (target) | Accuracy | 75.21 | 29
Semantic Parsing | OVERNIGHT v1.0 (test) | Blocks Domain Score | 61.9 | 26
Image Classification | ImageNet (source) | Accuracy | 71.51 | 23
Image Classification | StanfordCars (test) | Base Accuracy | 78.12 | 11
Image Classification | ImageNet-A (target) | Accuracy | 49.71 | 11
Image Classification | DTD (base-to-new generalization) | Base Accuracy | 79.44 | 5
Semantic Parsing | TOPv2 (25 SPIS) | Reminder Score | 64.2 | 3

Showing 10 of 12 rows.
