
Gradient-Based Constrained Sampling from Language Models

About

Large pretrained language models generate fluent text but are notoriously hard to sample from in a controllable way. In this work, we study constrained sampling from such language models: generating text that satisfies user-defined constraints while maintaining fluency and the model's performance on a downstream task. We propose MuCoLa, a sampling procedure that combines the log-likelihood of the language model with arbitrary (differentiable) constraints into a single energy function and then generates samples in a non-autoregressive manner. Specifically, it initializes the entire output sequence with noise and follows a Markov chain defined by Langevin dynamics using the gradients of the energy function. We evaluate MuCoLa on text generation with soft and hard constraints as well as their combinations, obtaining significant improvements over competitive baselines for toxicity avoidance, sentiment control, and keyword-guided generation.

Sachin Kumar, Biswajit Paria, Yulia Tsvetkov • 2022
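To make the sampling loop in the abstract concrete, below is a minimal PyTorch sketch of Langevin dynamics over soft (continuous) token embeddings, with a final nearest-neighbor projection onto the vocabulary. Everything here is an illustrative assumption, not the paper's implementation: the names langevin_sample, energy_fn, and embed_table are hypothetical, the hyperparameters are placeholders, and energy_fn is left abstract where the paper would combine the LM's negative log-likelihood with differentiable constraint penalties.

```python
import torch

def langevin_sample(energy_fn, embed_table, seq_len,
                    n_steps=500, step_size=0.1, noise_scale=1.0):
    """Sketch: run Langevin dynamics on soft token embeddings, then
    project each position to its nearest vocabulary embedding."""
    dim = embed_table.shape[1]
    # Initialize the entire output sequence with noise (non-autoregressive).
    e = torch.randn(seq_len, dim, requires_grad=True)
    for _ in range(n_steps):
        # Scalar energy: e.g. LM negative log-likelihood + constraint penalties.
        energy = energy_fn(e)
        (grad,) = torch.autograd.grad(energy, e)
        with torch.no_grad():
            noise = torch.randn_like(e)
            # Langevin update: descend the energy gradient, add Gaussian noise.
            e = e - step_size * grad + noise_scale * (2.0 * step_size) ** 0.5 * noise
        e.requires_grad_(True)
    # Decode: map each soft embedding to the closest token embedding.
    with torch.no_grad():
        token_ids = torch.cdist(e, embed_table).argmin(dim=-1)
    return token_ids
```

For a toxicity-avoidance constraint, for instance, energy_fn could add a differentiable classifier's toxicity score to the LM term, so the noise-injected gradient steps trade off fluency against the constraint at every position of the sequence simultaneously.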

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Language model detoxification | RealToxicityPrompts (test) | Distinct-1 | 55 | 54 |
| Controlled Text Generation | Base Language Model Efficiency Comparison | Speed Ratio | 24.03 | 8 |
| Toxicity avoidance | RealToxicityPrompts | Avg. Max Toxicity Score | 0.309 | 4 |
| Sentiment Control | Yelp polarity corpus (test) | Internal Classification Accuracy | 93.22 | 4 |
| Constrained Text Generation | CommonGen | Coverage | 99.8 | 3 |
| Keyword-guided topic control | Keyword-guided topic control dataset | Succ. (%) | 100 | 3 |
