
BeaverTails: Towards Improved Safety Alignment of LLM via a Human-Preference Dataset

About

In this paper, we introduce the BeaverTails dataset, aimed at fostering research on safety alignment in large language models (LLMs). This dataset uniquely separates annotations of helpfulness and harmlessness for question-answering pairs, thus offering distinct perspectives on these crucial attributes. In total, we have gathered safety meta-labels for 333,963 question-answer (QA) pairs and 361,903 pairs of expert comparison data for both the helpfulness and harmlessness metrics. We further showcase applications of BeaverTails in content moderation and reinforcement learning with human feedback (RLHF), emphasizing its potential for practical safety measures in LLMs. We believe this dataset provides vital resources for the community, contributing towards the safe development and deployment of LLMs. Our project page is available at the following URL: https://sites.google.com/view/pku-beavertails.
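
The abstract's first application, content moderation, rests on the per-pair safety meta-labels. Below is a minimal sketch of filtering unsafe QA pairs, assuming the dataset is mirrored on the Hugging Face Hub as "PKU-Alignment/BeaverTails" with a "330k_train" split and the fields "prompt", "response", and "is_safe"; check the project page for the authoritative identifier and schema.

```python
# Sketch: selecting harmful QA pairs via the safety meta-label,
# e.g. as training data for a content-moderation classifier.
# Dataset ID, split name, and field names are assumptions, not
# confirmed by the paper text itself.
from datasets import load_dataset

ds = load_dataset("PKU-Alignment/BeaverTails", split="330k_train")

# Keep only pairs whose meta-label marks the response as unsafe.
unsafe = ds.filter(lambda row: not row["is_safe"])
print(f"{len(unsafe)} of {len(ds)} QA pairs are labeled unsafe")
print(unsafe[0]["prompt"], unsafe[0]["response"], sep="\n---\n")
```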

Jiaming Ji, Mickel Liu, Juntao Dai, Xuehai Pan, Chi Zhang, Ce Bian, Chi Zhang, Ruiyang Sun, Yizhou Wang, Yaodong Yang • 2023
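
The second application, RLHF with decoupled helpfulness and harmlessness signals, typically fits one scoring model per comparison stream. The sketch below shows the standard Bradley-Terry pairwise preference loss that such comparison data supports; it illustrates the general technique rather than the authors' implementation, and all names in it are hypothetical.

```python
# Sketch: pairwise (Bradley-Terry) preference loss. BeaverTails
# provides two independent comparison sets, so the same loss can fit
# a helpfulness reward model and a harmlessness cost model separately.
import torch
import torch.nn.functional as F

def preference_loss(score_chosen: torch.Tensor,
                    score_rejected: torch.Tensor) -> torch.Tensor:
    # Negative log-likelihood that the preferred response scores
    # higher: -log sigmoid(s_chosen - s_rejected), averaged per batch.
    return -F.logsigmoid(score_chosen - score_rejected).mean()

# Illustrative usage with dummy scalar scores for a batch of 4 pairs.
chosen, rejected = torch.randn(4), torch.randn(4)
print(preference_loss(chosen, rejected))
```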

Related benchmarks

Task | Dataset | Metric | Result | Rank
Commonsense Reasoning | HellaSwag | Accuracy | 56.17 | 1891
Commonsense Reasoning | WinoGrande | Accuracy | 69.69 | 372
Word Prediction | LAMBADA | Accuracy | 73.04 | 148
Massive Multitask Language Understanding | MMLU | Accuracy | 60.08 | 117
Safety Classification | SafeRLHF | F1 Score | 0.721 | 48
Mathematical Reasoning | GSM8K | Accuracy | 76.83 | 42
Response Classification | EXPGUARD (test) | Financial Score | 76.9 | 40
Response Harmfulness Detection | XSTEST-RESP | Response Harmfulness F1 | 83.6 | 34
Response Harmfulness Classification | WildGuard (test) | F1 (Total) | 63.4 | 30
Commonsense Reasoning | HellaSwag | HS Score | 44.22 | 28

(10 of 21 benchmark rows shown.)
