
Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback

About

We apply preference modeling and reinforcement learning from human feedback (RLHF) to finetune language models to act as helpful and harmless assistants. We find this alignment training improves performance on almost all NLP evaluations, and is fully compatible with training for specialized skills such as Python coding and summarization. We explore an iterated online mode of training, where preference models and RL policies are updated on a weekly cadence with fresh human feedback data, efficiently improving our datasets and models. Finally, we investigate the robustness of RLHF training, and identify a roughly linear relation between the RL reward and the square root of the KL divergence between the policy and its initialization. Alongside our main results, we perform peripheral analyses on calibration, competing objectives, and the use of OOD detection, compare our models with human writers, and provide samples from our models using prompts appearing in recent related work.
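The abstract's closing robustness claim can be written as a formula; the notation below is an assumption for illustration (π the RL policy, π₀ its initialization, α an empirically fitted coefficient), not taken from this page:

```latex
r_{\mathrm{RL}} \;\approx\; \alpha \sqrt{D_{\mathrm{KL}}\!\left(\pi \,\Vert\, \pi_0\right)}
```

In words: as the policy drifts from its initialization during RL training, reward grows roughly in proportion to the square root of the accumulated KL divergence.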

Yuntao Bai, Andy Jones, Kamal Ndousse, Amanda Askell, Anna Chen, Nova DasSarma, Dawn Drain, Stanislav Fort, Deep Ganguli, Tom Henighan, Nicholas Joseph, Saurav Kadavath, Jackson Kernion, Tom Conerly, Sheer El-Showk, Nelson Elhage, Zac Hatfield-Dodds, Danny Hernandez, Tristan Hume, Scott Johnston, Shauna Kravec, Liane Lovitt, Neel Nanda, Catherine Olsson, Dario Amodei, Tom Brown, Jack Clark, Sam McCandlish, Chris Olah, Ben Mann, Jared Kaplan • 2022
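The preference modeling step described in the abstract is commonly implemented as a pairwise Bradley-Terry comparison loss over chosen and rejected responses. The sketch below is illustrative only; the function name and the use of NumPy are assumptions, not the authors' implementation:

```python
import numpy as np

def preference_loss(r_chosen, r_rejected):
    """Bradley-Terry negative log-likelihood for pairwise preference data.

    The model scores each response; P(chosen preferred over rejected) is
    sigmoid(r_chosen - r_rejected), so the per-pair loss is
    -log sigmoid(delta) = log(1 + exp(-delta)), averaged over pairs.
    """
    delta = np.asarray(r_chosen, dtype=float) - np.asarray(r_rejected, dtype=float)
    return float(np.mean(np.log1p(np.exp(-delta))))
```

A well-trained preference model drives this loss toward zero by scoring chosen responses above rejected ones; scoring them equally gives the chance-level loss log 2 ≈ 0.693.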

Related benchmarks

Task | Dataset | Metric | Result | Rank
LLM Judge Agreement | MT-bench First Turn | Agreement Rate | 0.89 | 34
Instruction Following | Vicuna benchmark zero-shot | Pairwise Score (ChatGPT vs Sys) | 55.5 | 21
Instruction Following | AlpacaEval, MT-bench, Vicuna-bench | AlpacaEval Score | 88.4 | 13
Response Generation | HH dataset | Reward | -1.24 | 13
Jailbreak Detection | JailBreakBench Single Turn 35 | F1 Score | 86 | 10
Inference Latency | Multi-turn Adversarial Defense Latency Benchmark (inference) | Latency (ms) | 43 | 10
Multi-turn Jailbreak Detection | HarmBench and DEFCON Multi-turn Jailbreak N=1,010 (test) | F1 Score | 51 | 10
Judge Agreement | Chatbot Arena Random = 50% (S2) | Agreement | 84 | 10
Judge Agreement | Chatbot Arena Random = 33% (S1) | Agreement Rate | 53 | 10
Multiple-choice Question Answering | BIG-bench HHH Eval | Overall Score | 86 | 7

(Showing 10 of 16 rows.)
