
Control Prefixes for Parameter-Efficient Text Generation

About

Prefix-tuning is a powerful lightweight technique for adapting a large pre-trained language model to a downstream application. However, it learns a single dataset-level prompt that is shared by every example. We extend this idea and propose a dynamic method, Control Prefixes, which allows for the inclusion of conditional input-dependent information, combining the benefits of prompt tuning and controlled generation. The method incorporates attribute-level learnable representations into different layers of a pre-trained transformer, allowing the generated text to be guided in a particular direction. We provide a systematic evaluation of the technique and apply it to five datasets from the GEM benchmark for natural language generation (NLG). Although the aim is to develop a parameter-efficient model, we show Control Prefixes can even outperform full fine-tuning methods. We present state-of-the-art results on several data-to-text datasets, including WebNLG.
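For intuition, below is a minimal PyTorch sketch of how attribute-level control prefixes could sit alongside a standard dataset-level prefix. This is not the authors' released code: the module name, prefix lengths, and initialisation are assumptions. It only shows the shared and per-attribute parameters being materialised as per-layer key/value pairs that a frozen transformer would prepend to self-attention.

```python
import torch
import torch.nn as nn

class ControlPrefixes(nn.Module):
    """Sketch of Control Prefixes (illustrative; all sizes/names assumed).

    A shared, dataset-level prefix is learned as in prefix-tuning, plus a
    shorter prefix per attribute label. Both are turned into per-layer
    key/value pairs to prepend to self-attention in a frozen transformer.
    """

    def __init__(self, n_layers, n_heads, d_head,
                 n_attributes, general_len=10, control_len=3):
        super().__init__()
        d_kv = n_layers * 2 * n_heads * d_head  # key + value for every layer
        # Dataset-level ("general") prefix: one row per prefix position.
        self.general = nn.Parameter(torch.randn(general_len, d_kv) * 0.02)
        # Attribute-level control prefixes: one short prefix per label.
        self.control = nn.Parameter(
            torch.randn(n_attributes, control_len, d_kv) * 0.02)
        self.n_layers, self.n_heads, self.d_head = n_layers, n_heads, d_head

    def forward(self, attribute_ids):
        """attribute_ids: (batch,) long tensor, one label index per example."""
        batch = attribute_ids.shape[0]
        general = self.general.unsqueeze(0).expand(batch, -1, -1)
        control = self.control[attribute_ids]          # (batch, control_len, d_kv)
        prefix = torch.cat([general, control], dim=1)  # (batch, prefix_len, d_kv)
        # Reshape into per-layer key/value tensors:
        # (n_layers, 2, batch, n_heads, prefix_len, d_head)
        prefix = prefix.view(batch, -1, self.n_layers, 2,
                             self.n_heads, self.d_head)
        return prefix.permute(2, 3, 0, 4, 1, 5)

# Usage: a batch of two examples with different attribute labels.
prefixes = ControlPrefixes(n_layers=12, n_heads=12, d_head=64, n_attributes=4)
past_kv = prefixes(torch.tensor([0, 2]))
print(past_kv.shape)  # torch.Size([12, 2, 2, 12, 13, 64])
```

In this sketch only the prefix parameters are trainable; the pre-trained language model itself stays frozen, which is what makes the approach parameter-efficient.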

Jordan Clive, Kris Cao, Marek Rei • 2021

Related benchmarks

Task                    | Dataset              | Metric  | Result | Rank
------------------------|----------------------|---------|--------|-----
Summarization           | XSum (test)          | ROUGE-2 | 20.84  | 231
Data-to-text generation | DART (test)          | BLEU    | 52     | 42
Data-to-text generation | WebNLG (test)        | BLEU    | 62.27  | 39
Sentence Simplification | ASSET English (test) | SARI    | 43.58  | 37
Data-to-text generation | E2E (test)           | BLEU    | 44.2   | 33
Data-to-text generation | Cleaned E2E (test)   | BLEU    | 44.15  | 9
Data-to-text generation | WebNLG+ 2020 (test)  | BLEU    | 0.5541 | 5
Data-to-text generation | DART v1.1.1 (test)   | BLEU    | 51.95  | 4
