Fine-Tuning Language Models from Human Preferences
About
Reward learning enables the application of reinforcement learning (RL) to tasks where reward is defined by human judgment, building a model of reward by asking humans questions. Most work on reward learning has used simulated environments, but complex information about values is often expressed in natural language, and we believe reward learning for language is a key to making RL practical and safe for real-world tasks. In this paper, we build on advances in generative pretraining of language models to apply reward learning to four natural language tasks: continuing text with positive sentiment or physically descriptive language, and summarization tasks on the TL;DR and CNN/Daily Mail datasets. For stylistic continuation we achieve good results with only 5,000 comparisons evaluated by humans. For summarization, models trained with 60,000 comparisons copy whole sentences from the input but skip irrelevant preamble; this leads to reasonable ROUGE scores and very good performance according to our human labelers, but may be exploiting the fact that labelers rely on simple heuristics.
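The reward model described here is fit to pairwise human comparisons between model outputs. A minimal sketch of the standard Bradley-Terry-style preference loss used for this kind of reward learning (function names and scalar rewards are illustrative, not from the paper):

```python
import math

def preference_probability(r_preferred: float, r_other: float) -> float:
    """Bradley-Terry model: probability that the segment with reward
    r_preferred is chosen by a human over the one with reward r_other."""
    return 1.0 / (1.0 + math.exp(r_other - r_preferred))

def preference_loss(r_preferred: float, r_other: float) -> float:
    """Negative log-likelihood of the human's recorded choice.
    Minimizing this over many comparisons pushes the reward model
    to assign higher reward to preferred segments."""
    return -math.log(preference_probability(r_preferred, r_other))
```

When both segments get equal reward the model predicts a 50/50 choice, and the loss shrinks as the margin in favor of the preferred segment grows; in practice the scalar rewards come from a learned network evaluated on each text segment.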
Related benchmarks
| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Controllable Language Generation | -ve Sentiment Pointwise Constraint | Dist-3 | 0.94 | 17 |
| Controllable Language Generation | Word Amazing Pointwise Constraint | Control Score | 0.82 | 5 |
| Controllable Language Generation | Wordlist Science Pointwise Constraint | Ctrl Score | 100 | 5 |
| Controllable Language Generation | +ve Sentiment Pointwise Constraint | Control Success Rate | 98 | 5 |
| Detoxification | RealToxicityPrompts (test) | Toxicity Score (Avg) | 0.13 | 5 |
| Controllable Language Generation | Word WikiLeaks Pointwise Constraint | Ctrl Score | 0.68 | 5 |
| Controllable Language Generation | Wordlist Politics Pointwise Constraint | Ctrl | 1 | 5 |
| Offline Reinforcement Learning | D4RL MuJoCo v2 | Ant Return (Random) | 31.52 | 4 |