
Direct Large Language Model Alignment Through Self-Rewarding Contrastive Prompt Distillation

About

Aligning large language models (LLMs) with human expectations without human-annotated preference data is an important problem. In this paper, we propose a method to evaluate response preference using the output probabilities of response pairs under contrastive prompt pairs, which achieves better performance than RLAIF on LLaMA2-7B and LLaMA2-13B. Based on this, we propose an automatic alignment method, Direct Large Model Alignment (DLMA). First, we use contrastive prompt pairs to automatically generate preference data. Then, we evaluate the generated preference data with the same contrastive prompt pairs and compute a self-rewarding score. Finally, we align LLMs with the DPO algorithm, incorporating this self-rewarding score. In our experiments, the DLMA method surpasses the RLHF method without relying on human-annotated preference data.
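The core idea above can be sketched in a few lines: a response's self-rewarding score is the gap between its log-likelihood under a positive (helpful/harmless) prompt and under a negative (harmful) prompt, and that gap can serve as a margin in a DPO-style loss. The function names, the toy log-probabilities, and the exact form of the margin term below are illustrative assumptions, not the paper's implementation.

```python
import math

def sequence_logprob(token_logprobs):
    # Sum per-token log-probabilities into a sequence log-probability.
    return sum(token_logprobs)

def self_rewarding_score(logp_pos, logp_neg):
    # Contrastive score: log P(y | x, prompt+) - log P(y | x, prompt-).
    # A larger score means the positive prompt raises this response's
    # likelihood more, i.e. the response looks more aligned.
    return sequence_logprob(logp_pos) - sequence_logprob(logp_neg)

def prefer(score_a, score_b):
    # Pick the response with the larger contrastive score as "chosen".
    return "a" if score_a > score_b else "b"

def dpo_margin_loss(logratio_w, logratio_l, margin, beta=0.1):
    # DPO-style loss with the self-rewarding score as a margin (sketch):
    #   -log sigmoid(beta * (logratio_w - logratio_l - margin))
    # logratio_* are the policy/reference log-ratios of the chosen (w)
    # and rejected (l) responses, as in standard DPO.
    z = beta * (logratio_w - logratio_l - margin)
    return -math.log(1.0 / (1.0 + math.exp(-z)))

# Toy per-token log-probs for two candidate responses (made-up numbers).
resp_a_pos, resp_a_neg = [-0.5, -0.7], [-1.2, -1.5]   # response A
resp_b_pos, resp_b_neg = [-0.9, -1.0], [-0.8, -0.9]   # response B

score_a = self_rewarding_score(resp_a_pos, resp_a_neg)   # 1.5
score_b = self_rewarding_score(resp_b_pos, resp_b_neg)   # -0.2
print(prefer(score_a, score_b))                          # "a" is preferred
```

In this toy example, response A's likelihood rises sharply under the positive prompt, so it is labeled the preferred response; the score difference would then feed into the margin term when running DPO on the automatically generated pairs.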

Aiwei Liu, Haoping Bai, Zhiyun Lu, Xiang Kong, Simon Wang, Jiulong Shan, Meng Cao, Lijie Wen • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|------|---------|--------|--------|------|
| Assistant Response Alignment (Helpfulness and Harmlessness) | HH-RLHF (test) | - | - | 31 |
| Harmfulness Evaluation | PKU-SafeRLHF | Beaver-7B Cost Score | -1.11 | 10 |
| Harmfulness Evaluation | HH-Harmless | Beaver-7B Cost Score | 3.25 | 10 |
| Preference Evaluation | PKU-SafeRLHF | Win Rate | 57 | 8 |
| Preference Evaluation | HH-Harmless | Win Rate | 60 | 8 |
| Preference Evaluation | HH-Helpful | Win Count | 52 | 8 |
| LLM Alignment | HH-Harmless (test) | Win Rate | 59 | 2 |
| LLM Alignment | PKU-Safety (test) | Win Rate | 58 | 2 |

Other info

Code
