
GeDi: Generative Discriminator Guided Sequence Generation

About

While large-scale language models (LMs) are able to imitate the distribution of natural language well enough to generate realistic text, it is difficult to control which regions of the distribution they generate. This is especially problematic because datasets used for training large LMs usually contain significant toxicity, hate, bias, and negativity. We propose GeDi as an efficient method for using smaller LMs as generative discriminators to guide generation from large LMs, making them safer and more controllable. GeDi guides generation at each step by computing classification probabilities for all possible next tokens via Bayes rule, normalizing over two class-conditional distributions: one conditioned on the desired attribute, or control code, and another conditioned on the undesired attribute, or anti-control code. We find that GeDi gives stronger controllability than the state-of-the-art method while also achieving generation speeds more than 30 times faster. Additionally, training GeDi on only four topics allows us to controllably generate new topics zero-shot from just a keyword, unlocking a new capability that previous controllable generation methods do not have. Lastly, we show that GeDi can make GPT-2 (1.5B parameters) significantly less toxic without sacrificing linguistic quality, making it by far the most practical existing method for detoxifying large language models while maintaining a fast generation speed.
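The per-step reweighting described above can be sketched in a few lines. This is a hypothetical NumPy illustration, not the authors' code: it assumes we already have next-token log-probabilities from the base LM and from the discriminator LM run twice, once with the control code and once with the anti-control code, and that the two classes have equal priors. The `omega` exponent (a posterior-scaling weight) is an assumed knob for illustration.

```python
import numpy as np

def gedi_next_token_probs(base_logprobs, pos_logprobs, neg_logprobs, omega=1.0):
    """Sketch of GeDi-style guided decoding over one vocabulary step.

    base_logprobs: next-token log-probs from the large base LM, shape (vocab,)
    pos_logprobs:  log-probs from the small generative discriminator,
                   conditioned on the desired control code, shape (vocab,)
    neg_logprobs:  same discriminator conditioned on the anti-control
                   code, shape (vocab,)
    """
    # Bayes rule with equal class priors: for each candidate token,
    # P(desired | token) = p_pos / (p_pos + p_neg), done in log space.
    log_p_desired = pos_logprobs - np.logaddexp(pos_logprobs, neg_logprobs)
    # Reweight the base LM: P(token) ∝ P_LM(token) * P(desired | token)^omega
    weighted = base_logprobs + omega * log_p_desired
    weighted -= np.max(weighted)  # subtract max for numerical stability
    probs = np.exp(weighted)
    return probs / probs.sum()
```

In practice the discriminator is a small class-conditional LM, so both class-conditional distributions come from one extra forward pass per class, which is why guidance stays cheap relative to rerunning a large classifier on every candidate continuation.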

Ben Krause, Akhilesh Deepak Gotmare, Bryan McCann, Nitish Shirish Keskar, Shafiq Joty, Richard Socher, Nazneen Fatema Rajani • 2020

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Language model detoxification | RealToxicityPrompts (test) | Distinct-1: 62 | 54 |
| Toxicity Mitigation | RealToxicityPrompts challenging | Avg Toxicity (Max): 29.7 | 46 |
| Detoxification | RealToxicityPrompts challenging | Max Toxicity: 0.297 | 32 |
| Detoxification | AttaQ benchmark | Avg Toxicity (Max): 0.155 | 32 |
| Detoxification | Jigsaw (test) | Perplexity (PPL): 81.6 | 29 |
| Sentiment Steering | OpenWebText Neutral to Positive (test) | Perplexity (PPL): 58.41 | 27 |
| Sentiment Steering | OpenWebText Neutral to Negative (test) | Perplexity (PPL): 84.11 | 27 |
| Controllable Text Generation | Yelp (test) | Perplexity (PPL): 616.9 | 20 |
| Toxicity Evaluation | BOLD 23679 prompts (test) | Avg Toxicity (Max): 0.051 | 18 |
| Controllable Language Generation | -ve Sentiment Pointwise Constraint | Dist-3: 0.832 | 17 |

Showing 10 of 28 rows.
