
DExperts: Decoding-Time Controlled Text Generation with Experts and Anti-Experts

About

Despite recent advances in natural language generation, it remains challenging to control attributes of generated text. We propose DExperts: Decoding-time Experts, a decoding-time method for controlled text generation that combines a pretrained language model with "expert" LMs and/or "anti-expert" LMs in a product of experts. Intuitively, under the ensemble, tokens only get high probability if they are considered likely by the experts, and unlikely by the anti-experts. We apply DExperts to language detoxification and sentiment-controlled generation, where we outperform existing controllable generation methods on both automatic and human evaluations. Moreover, because DExperts operates only on the output of the pretrained LM, it is effective with (anti-)experts of smaller size, including when operating on GPT-3. Our work highlights the promise of tuning small LMs on text with (un)desirable attributes for efficient decoding-time steering.
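The ensemble above can be sketched numerically. In DExperts, the base LM's next-token logits are shifted toward the expert and away from the anti-expert before sampling. The toy logit vectors and the four-token vocabulary below are illustrative assumptions, not outputs of real models:

```python
import numpy as np

def dexperts_logits(base, expert, anti_expert, alpha=1.0):
    """Steer the base LM's logits toward the expert and away from
    the anti-expert; alpha controls the strength of steering."""
    return base + alpha * (expert - anti_expert)

def softmax(z):
    z = z - z.max()          # numerical stability
    e = np.exp(z)
    return e / e.sum()

# Toy 4-token vocabulary; logits are made up for illustration.
base   = np.array([2.0, 1.0, 0.5, 0.1])
expert = np.array([0.5, 2.5, 0.2, 0.1])   # favors token 1 (desired attribute)
anti   = np.array([2.5, 0.2, 0.5, 0.1])   # favors token 0 (undesired attribute)

probs = softmax(dexperts_logits(base, expert, anti, alpha=1.0))
# Token 1 ends up most likely: the expert likes it and the
# anti-expert does not, while token 0 is pushed down.
```

Note that the combination needs only the output logits of each model, which is why smaller (anti-)experts can steer a much larger base LM.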

Alisa Liu, Maarten Sap, Ximing Lu, Swabha Swayamdipta, Chandra Bhagavatula, Noah A. Smith, Yejin Choi • 2021

Related benchmarks

Task | Dataset | Metric | Result | Rank
Instruction Following | AlpacaEval 2.0 (test) | LC Win Rate (%) | 16.58 | 71
Language Model Detoxification | RealToxicityPrompts (test) | Distinct-1 | 58 | 54
Toxicity Mitigation | RealToxicityPrompts challenging | Avg Toxicity (Max) | 52.7 | 46
Detoxification | AttaQ benchmark | Avg Toxicity (Max) | 0.165 | 32
Detoxification | RealToxicityPrompts challenging | Max Toxicity | 0.527 | 32
Sentiment Steering | OpenWebText Neutral to Negative (test) | Perplexity (PPL) | 32.86 | 27
Sentiment Steering | OpenWebText Neutral to Positive (test) | Perplexity (PPL) | 30.52 | 27
Detoxification | RealToxicityPrompts | Avg Max Toxicity | 0.293 | 22
Toxicity Evaluation | BOLD 23679 prompts (test) | Avg Toxicity (Max) | 0.052 | 18
Controllable Language Generation | -ve Sentiment Pointwise Constraint | Dist-3 | 0.861 | 17
Showing 10 of 27 rows

Other info

Code
