
Diffusion Guided Language Modeling

About

Current language models demonstrate remarkable proficiency in text generation. However, for many applications it is desirable to control attributes of the generated language, such as sentiment or toxicity -- ideally tailored to each specific use case and target audience. For auto-regressive language models, existing guidance methods are prone to decoding errors that cascade during generation and degrade performance. In contrast, text diffusion models can easily be guided with, for example, a simple linear sentiment classifier -- however, they suffer from significantly higher perplexity than auto-regressive alternatives. In this paper we use a guided diffusion model to produce a latent proposal that steers an auto-regressive language model to generate text with desired properties. Our model inherits the unmatched fluency of the auto-regressive approach and the plug-and-play flexibility of diffusion. We show that it outperforms previous plug-and-play guidance methods across a wide range of benchmark datasets. Further, controlling a new attribute in our framework reduces to training a single logistic regression classifier.
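The abstract notes that a diffusion latent can be guided by a simple linear (logistic regression) classifier. As a rough illustration of how classifier guidance works in latent space -- not the authors' implementation, and with the denoiser, classifier weights, and guidance scale all hypothetical -- one can nudge the denoised latent along the gradient of the classifier's log-probability for the target attribute:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def guided_denoise_step(z, denoise_fn, w, b, guidance_scale=2.0):
    """One guided update: move the denoised latent toward the
    positive class of a logistic-regression classifier p(y=1 | z).

    For a linear classifier sigmoid(w @ z + b), the gradient of
    log p(y=1 | z) with respect to z is (1 - sigmoid(w @ z + b)) * w.
    """
    z_hat = denoise_fn(z)  # the diffusion model's denoised estimate
    grad = (1.0 - sigmoid(z_hat @ w + b)) * w  # attribute gradient
    return z_hat + guidance_scale * grad

# Toy usage with an identity "denoiser" and a classifier that
# prefers a large first latent dimension (all values illustrative).
w = np.array([1.0, 0.0])
b = 0.0
z = np.array([-1.0, 0.5])
z_guided = guided_denoise_step(z, lambda z: z, w, b)
```

After the step, the classifier assigns the guided latent a higher probability of the target attribute than the original latent; in the paper's framework, such a guided latent would then serve as the proposal conditioning the auto-regressive decoder.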

Justin Lovelace, Varsha Kishore, Yiwei Chen, Kilian Q. Weinberger · 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Controllable Language Generation | -ve Sentiment Pointwise Constraint | Dist-3 | 0.868 | 17 |
| Language Generation | C4 (val) | OLMo Perplexity | 19.4 | 15 |
| Toxicity Mitigation | RealToxicityPrompts (test) | Full Toxicity | 10.1 | 14 |
| Controllable Text Generation | Sentiment Control Positive Target | Positive Prop. (RoBERTa) | 98.9 | 12 |
| Language Generation | OpenWebText (val) | OLMo Perplexity | 14.2 | 8 |
| Language Generation | Experimental Setup | Relative Runtime | 1.7 | 8 |

Other info

Code
