
BeaverTails: Towards Improved Safety Alignment of LLM via a Human-Preference Dataset

About

In this paper, we introduce the BeaverTails dataset, aimed at fostering research on safety alignment in large language models (LLMs). This dataset uniquely separates annotations of helpfulness and harmlessness for question-answering pairs, thus offering distinct perspectives on these crucial attributes. In total, we have gathered safety meta-labels for 333,963 question-answer (QA) pairs and 361,903 pairs of expert comparison data for both the helpfulness and harmlessness metrics. We further showcase applications of BeaverTails in content moderation and reinforcement learning with human feedback (RLHF), emphasizing its potential for practical safety measures in LLMs. We believe this dataset provides vital resources for the community, contributing towards the safe development and deployment of LLMs. Our project page is available at the following URL: https://sites.google.com/view/pku-beavertails.
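As a concrete illustration of the dataset's structure, here is a minimal sketch of loading the QA pairs with their safety meta-labels, assuming the dataset is published on the Hugging Face Hub under the PKU-Alignment organization (the repository ID, split name, and field names below are assumptions and should be checked against the dataset card):

```python
# Minimal sketch: inspecting BeaverTails' harmlessness annotations, which are
# kept separate from helpfulness judgements. Assumes the Hugging Face release
# "PKU-Alignment/BeaverTails"; split and field names are illustrative.
from datasets import load_dataset

qa = load_dataset("PKU-Alignment/BeaverTails", split="30k_train")
example = qa[0]
print(example["prompt"])    # the question
print(example["response"])  # the answer being annotated
print(example["is_safe"])   # harmlessness meta-label for the QA pair
print(example["category"])  # per-harm-category flags behind the meta-label
```

The expert comparison pairs can likewise support the RLHF application mentioned above, e.g. by fitting separate reward (helpfulness) and cost (harmlessness) models. One standard choice for such pairwise data, sketched here in PyTorch as a generic illustration rather than the authors' exact training code, is a Bradley-Terry style loss that pushes the preferred response's scalar score above the rejected one's:

```python
import torch
import torch.nn.functional as F

def pairwise_preference_loss(preferred_scores: torch.Tensor,
                             rejected_scores: torch.Tensor) -> torch.Tensor:
    # Bradley-Terry objective: maximize the log-probability that the preferred
    # response outscores the rejected one. The same form can be used for a
    # helpfulness reward model and a harmlessness cost model trained separately.
    return -F.logsigmoid(preferred_scores - rejected_scores).mean()
```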

Jiaming Ji, Mickel Liu, Juntao Dai, Xuehai Pan, Chi Zhang, Ce Bian, Chi Zhang, Ruiyang Sun, Yizhou Wang, Yaodong Yang • 2023

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Response Harmfulness Detection | XSTEST-RESP | Response Harmfulness F1 | 83.6 | 34 |
| Safety Classification | SafeRLHF | F1 Score | 0.721 | 32 |
| Response Harmfulness Classification | WildGuard (test) | F1 (Total) | 63.4 | 30 |
| Response Harmfulness Detection | HarmBench | F1 Score | 58.4 | 23 |
| Response Harmfulness Detection | BeaverTails | F1 Score | 89.9 | 18 |
| Refusal Detection | WildGuard (test) | F1 (Harmful) | 80.7 | 14 |
| Response Harmfulness Classification | Public Response Harmfulness Benchmarks (HarmBenchResponse, SafeRLHF, BeaverTails, XSTEST-RESP) | HarmBenchResponse Score | 58.4 | 12 |
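Most of the results above are F1 scores of binary harmfulness classifiers. For reference, a minimal sketch of that computation with scikit-learn (the labels are made up, and the convention that 1 means "harmful" is an assumption):

```python
# Minimal sketch: the binary F1 metric reported in the table above.
from sklearn.metrics import f1_score

y_true = [1, 0, 1, 1, 0, 1]  # gold harmfulness labels (1 = harmful, assumed)
y_pred = [1, 0, 0, 1, 0, 1]  # classifier predictions
print(f1_score(y_true, y_pred))  # F1 = 2 * precision * recall / (precision + recall)
```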
