
RLCD: Reinforcement Learning from Contrastive Distillation for Language Model Alignment

About

We propose Reinforcement Learning from Contrastive Distillation (RLCD), a method for aligning language models to follow principles expressed in natural language (e.g., to be more harmless) without using human feedback. RLCD creates preference pairs from two contrasting model outputs, one using a positive prompt designed to encourage following the given principles, and one using a negative prompt designed to encourage violating them. Using two different prompts causes model outputs to be more differentiated on average, resulting in cleaner preference labels in the absence of human annotations. We then use the preference pairs to train a preference model, which is in turn used to improve a base unaligned language model via reinforcement learning. Empirically, RLCD outperforms RLAIF (Bai et al., 2022b) and context distillation (Huang et al., 2022) baselines across three diverse alignment tasks--harmlessness, helpfulness, and story outline generation--and when using both 7B and 30B model scales for simulating preference data.
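
To make the data-generation step concrete, below is a minimal Python sketch of how contrasting prompts can be used to simulate a preference pair, assuming a Hugging Face causal LM. The model name, prompt wording, and helper function names are illustrative assumptions, not taken from the paper.

```python
# Sketch of RLCD-style preference-pair simulation (illustrative only).
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "meta-llama/Llama-2-7b-hf"  # assumption: any base, unaligned LM
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

def sample_completion(prompt: str, max_new_tokens: int = 128) -> str:
    """Sample one continuation of `prompt` from the base model."""
    inputs = tokenizer(prompt, return_tensors="pt")
    output_ids = model.generate(
        **inputs, do_sample=True, top_p=0.9, max_new_tokens=max_new_tokens
    )
    # Keep only the newly generated tokens, not the prompt itself.
    new_tokens = output_ids[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)

def make_preference_pair(conversation: str) -> dict:
    # Positive prompt encourages the target principle (here, harmlessness);
    # negative prompt encourages violating it. Wording is a stand-in for
    # the paper's actual prompt templates.
    positive = f"(harmless, polite response) {conversation}"
    negative = f"(toxic, harmful response) {conversation}"
    o_plus = sample_completion(positive)
    o_minus = sample_completion(negative)
    # The label comes for free: the positively prompted output is preferred,
    # with no human annotation required.
    return {"prompt": conversation, "chosen": o_plus, "rejected": o_minus}
```

Pairs simulated this way are then used to train the preference model, which in turn guides reinforcement learning on the base model, as described above.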

Kevin Yang, Dan Klein, Asli Celikyilmaz, Nanyun Peng, Yuandong Tian · 2023

Related benchmarks

Task                             | Dataset                  | Result                      | Rank
Personalized response selection  | PCogAlignBench LS1->LS2  | P. Score: 4.014             | 14
Personalized response selection  | PCogAlignBench LS2->LS1  | P. Score: 4.016             | 14
Personalized response selection  | PCogAlignBench LS1->LS1  | P. Score: 3.996             | 14
Personalized response selection  | PCogAlignBench LS2->LS2  | P. Score: 4.006             | 14
Personalized response selection  | PCogAlignBench Average   | P. Score: 3.996             | 14
Personalized Response Generation | LS1 -> LS2 (test)        | RSA: 3.929                  | 13
Harmfulness Evaluation           | PKU-SafeRLHF             | Beaver-7B Cost Score: -0.14 | 10
Harmfulness Evaluation           | HH-Harmless              | Beaver-7B Cost Score: 3.89  | 10
