Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback
About
We apply preference modeling and reinforcement learning from human feedback (RLHF) to finetune language models to act as helpful and harmless assistants. We find this alignment training improves performance on almost all NLP evaluations, and is fully compatible with training for specialized skills such as python coding and summarization. We explore an iterated online mode of training, where preference models and RL policies are updated on a weekly cadence with fresh human feedback data, efficiently improving our datasets and models. Finally, we investigate the robustness of RLHF training, and identify a roughly linear relation between the RL reward and the square root of the KL divergence between the policy and its initialization. Alongside our main results, we perform peripheral analyses on calibration, competing objectives, and the use of OOD detection, compare our models with human writers, and provide samples from our models using prompts appearing in recent related work.
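The abstract's quantitative finding is that RL reward grows roughly linearly with the square root of the KL divergence between the policy and its initialization. A minimal sketch of checking such a relation on training snapshots, using synthetic KL and reward values (the slope, intercept, and noise level here are invented for illustration, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical training snapshots: D_KL(policy || init) grows as
# RLHF training progresses (values are synthetic).
kl = np.linspace(1.0, 100.0, 50)

# Simulate the reported relation: reward ~ linear in sqrt(KL),
# plus a little measurement noise. Coefficients are made up.
true_slope, true_intercept = 0.35, -1.0
reward = true_slope * np.sqrt(kl) + true_intercept + rng.normal(0.0, 0.02, kl.size)

# Fit reward against sqrt(KL) with ordinary least squares; a good
# linear fit in sqrt(KL) space is the signature the paper describes.
slope, intercept = np.polyfit(np.sqrt(kl), reward, deg=1)
print(f"fitted slope={slope:.3f}, intercept={intercept:.3f}")
```

With real data one would plot reward against sqrt(KL) across checkpoints and inspect the residuals rather than rely on the fit alone.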
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| LLM Judge Agreement | MT-bench First Turn | Agreement Rate | 0.89 | 34 |
| Instruction Following | Vicuna benchmark zero-shot | Pairwise Score (ChatGPT vs Sys) | 55.5 | 21 |
| Instruction Following | AlpacaEval, MT-bench, Vicuna-bench | AlpacaEval Score | 88.4 | 13 |
| Response Generation | HH dataset | Reward | -1.24 | 13 |
| Jailbreak Detection | JailBreakBench Single Turn 35 | F1 Score | 86 | 10 |
| Inference Latency | Multi-turn Adversarial Defense Latency Benchmark (inference) | Latency (ms) | 43 | 10 |
| Multi-turn Jailbreak Detection | HarmBench and DEFCON Multi-turn Jailbreak N=1,010 (test) | F1 Score | 51 | 10 |
| Judge Agreement | Chatbot Arena Random = 50% (S2) | Agreement | 84 | 10 |
| Judge Agreement | Chatbot Arena Random = 33% (S1) | Agreement Rate | 53 | 10 |
| Multiple-choice Question Answering | BIG-bench HHH Eval | Overall Score | 86 | 7 |
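Several rows above report judge agreement against a random baseline (50% in the two-way S2 setting, ~33% in the three-way S1 setting, where ties are allowed). A small sketch of how such an agreement rate is typically computed; the function name and the toy labels are hypothetical, not from the benchmark itself:

```python
def agreement_rate(judge_labels, human_labels):
    """Fraction of items where the model judge's verdict matches
    the human (e.g. majority-vote) verdict."""
    matches = sum(j == h for j, h in zip(judge_labels, human_labels))
    return matches / len(human_labels)

# Three-way setting (S1): verdicts in {"A", "B", "tie"}, so random
# guessing lands near 33%. In the two-way setting (S2) ties are
# excluded and random guessing gives 50%. Toy data for illustration:
judge = ["A", "B", "A", "tie", "B"]
human = ["A", "B", "B", "tie", "B"]
print(agreement_rate(judge, human))  # 4 of 5 match -> 0.8
```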