CTRL: A Conditional Transformer Language Model for Controllable Generation
About
Large-scale language models show promising text generation capabilities, but users cannot easily control particular aspects of the generated text. We release CTRL, a 1.63 billion-parameter conditional transformer language model, trained to condition on control codes that govern style, content, and task-specific behavior. Control codes were derived from structure that naturally co-occurs with raw text, preserving the advantages of unsupervised learning while providing more explicit control over text generation. These codes also allow CTRL to predict which parts of the training data are most likely given a sequence. This provides a potential method for analyzing large amounts of data via model-based source attribution. We have released multiple full-sized, pretrained versions of CTRL at https://github.com/salesforce/ctrl.
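The conditioning mechanism described above is straightforward: a control code is prepended to the input sequence, so the model learns the distribution p(x_t | control code, x_<t) and generation is steered toward the code's domain. A minimal sketch of that prepend-and-decode loop, using a toy stand-in model rather than the real CTRL network or API:

```python
# Sketch of control-code conditioning in the style of CTRL: the control
# code token is prepended to the prompt, shifting the model's next-token
# distribution toward that code's domain. The "model" below is a toy
# stand-in (it just echoes the control code), not the released CTRL model.

def build_input(control_code, prompt_tokens):
    """Prepend the control code token to the prompt, as CTRL does."""
    return [control_code] + prompt_tokens

def generate(model, control_code, prompt_tokens, max_new_tokens=3):
    """Greedy decoding conditioned on a control code.

    `model` maps a token sequence to the next token id.
    """
    seq = build_input(control_code, prompt_tokens)
    for _ in range(max_new_tokens):
        seq.append(model(seq))
    return seq

# Toy "model": always emits the control code it was conditioned on,
# making the effect of conditioning visible in the output.
toy_model = lambda seq: seq[0]

out = generate(toy_model, control_code=42, prompt_tokens=[7, 8])
print(out)  # [42, 7, 8, 42, 42, 42]
```

In the released model the same idea applies, but the control code is a vocabulary token (e.g. a domain tag) and the next-token function is the 1.63B-parameter transformer; source attribution then follows by scoring a fixed sequence under each control code and comparing the resulting likelihoods.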
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Sentiment Steering | OpenWebText Neutral to Negative (test) | Perplexity (PPL) | 35.94 | 27 |
| Sentiment Steering | OpenWebText Neutral to Positive (test) | Perplexity (PPL) | 43.79 | 27 |
| Class-Conditional Language Generation | AG-News | MAUVE (World) | 0.806 | 16 |
| Attribute-Controlled Dialogue Generation | DailyDialog-CG (test) | Emotion Accuracy (E-ACC) | 67.34 | 12 |
| Multi-Aspect Controllable Text Generation | Fyelp CompMCTG (Hold-Out) | Acomp | 82.02 | 12 |
| Multi-Aspect Controllable Text Generation | Fyelp ACD CompMCTG | Acomp | 74.63 | 12 |
| Multi-attribute Conditional Text Generation | CompMCTG Compositional Few-Shot 1.0 (test) | Accuracy | 65.94 | 10 |
| Multi-Aspect Controllable Text Generation | CompMCTG Overall Summary Average 1.0 | Aavg Score | 76.17 | 10 |
| Multi-Constraint Text Generation | CompMCTG Average 1.0 | Relevance (avg) | 3.77 | 10 |
| Multi-Aspect Controllable Text Generation | CompMCTG 1.0 (Original) | Aid Score | 79.1 | 10 |