
Hard Prompts Made Easy: Gradient-Based Discrete Optimization for Prompt Tuning and Discovery

About

The strength of modern generative models lies in their ability to be controlled through text-based prompts. Typical "hard" prompts are made from interpretable words and tokens, and must be hand-crafted by humans. There are also "soft" prompts, which consist of continuous feature vectors. These can be discovered using powerful optimization methods, but they cannot be easily interpreted, reused across models, or plugged into a text-based interface. We describe an approach to robustly optimize hard text prompts through efficient gradient-based optimization. Our approach automatically generates hard text-based prompts for both text-to-image and text-to-text applications. In the text-to-image setting, the method creates hard prompts for diffusion models, allowing API users to easily generate, discover, and mix and match image concepts without prior knowledge of how to prompt the model. In the text-to-text setting, we show that hard prompts can be automatically discovered that are effective for tuning language models for classification.
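The core idea in the abstract — optimizing a continuous prompt vector while evaluating the loss and its gradient at the nearest *discrete* token embedding — can be illustrated with a toy sketch. Everything below is a simplified stand-in: the embedding table, the target feature, and the squared-error loss are hypothetical placeholders for a real model's token embeddings and a CLIP-style similarity objective.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy embedding table standing in for a model's token embeddings
# (hypothetical sizes; a real run would use e.g. a CLIP text encoder's table).
vocab_size, dim = 50, 8
E = rng.normal(size=(vocab_size, dim))

# Target feature the prompt should match (stand-in for an image embedding).
target = E[7].copy()

def project(x):
    """Index of the vocabulary embedding nearest to the continuous vector x."""
    return int(np.argmin(((E - x) ** 2).sum(axis=1)))

# Optimize a single-token prompt: keep a continuous vector x, but always
# evaluate the loss (and its gradient) at the *projected* hard token;
# the gradient then updates the continuous vector.
x = rng.normal(size=dim)
lr = 0.05
best_tok = project(x)
best_loss = ((E[best_tok] - target) ** 2).sum()
for _ in range(500):
    tok = project(x)
    p = E[tok]                          # hard (discrete) prompt embedding
    loss = ((p - target) ** 2).sum()    # toy loss; real use: image-text similarity
    if loss < best_loss:
        best_tok, best_loss = tok, loss
    x -= lr * 2.0 * (p - target)        # gradient of the toy loss, taken at p
```

The prompt returned to the user is the discrete token `best_tok`, so the result stays interpretable and portable across any text interface, which is the property the abstract contrasts with soft prompts.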

Yuxin Wen, Neel Jain, John Kirchenbauer, Micah Goldblum, Jonas Geiping, Tom Goldstein • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Text Classification | SST-2 (test) | Accuracy | 70 | 185 |
| Subjectivity Classification | Subj (test) | Accuracy | 53.1 | 125 |
| Text Classification | MR (test) | Accuracy | 67.9 | 99 |
| Bias Discovery | Female-biased prompts | Female Proportion | 75 | 42 |
| Topic Classification | Yahoo (test) | Accuracy | 27 | 36 |
| Text Classification | Yelp P. (test) | Accuracy | 85.9 | 34 |
| Biased Prompt Discovery | Black-biased prompts | Black Bias Proportion | 4 | 18 |
| Biased Prompt Discovery | White-biased prompts | White Score | 76 | 18 |
| Bias Evaluation | Male-biased prompts | Male Bias (Base) | 0.8 | 14 |
| Text Classification | AG's News (test) | A-rate | 43.7 | 13 |
