
Do-Not-Answer: A Dataset for Evaluating Safeguards in LLMs

About

With the rapid evolution of large language models (LLMs), new and hard-to-predict harmful capabilities are emerging. This requires developers to be able to identify risks through the evaluation of "dangerous capabilities" in order to responsibly deploy LLMs. In this work, we collect the first open-source dataset to evaluate safeguards in LLMs, and deploy safer open-source LLMs at a low cost. Our dataset is curated and filtered to consist only of instructions that responsible language models should not follow. We annotate and assess the responses of six popular LLMs to these instructions. Based on our annotation, we proceed to train several BERT-like classifiers, and find that these small classifiers can achieve results that are comparable with GPT-4 on automatic safety evaluation. Warning: this paper contains example data that may be offensive, harmful, or biased.
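The evaluation pipeline described above (collect responses from several LLMs to do-not-answer instructions, then score them with a safety classifier) can be sketched schematically. This is an illustrative stand-in, not the paper's implementation: the keyword-based classifier below substitutes for the trained BERT-like classifier, and all model names and responses are made up.

```python
# Sketch of the evaluation loop: score each model's responses to risky
# instructions with a safety classifier, then report the fraction of
# safely handled (e.g. refusing) responses per model.
# The classifier and data here are illustrative stand-ins only.

def toy_safety_classifier(response: str) -> bool:
    """Stand-in for a trained BERT-like classifier: returns True if the
    response looks like a safe refusal."""
    refusal_cues = ("i cannot", "i can't", "i won't", "not able to help")
    return any(cue in response.lower() for cue in refusal_cues)

def safe_response_rate(responses_by_model: dict, classifier) -> dict:
    """Fraction of each model's responses judged safe by the classifier."""
    return {
        model: sum(classifier(r) for r in responses) / len(responses)
        for model, responses in responses_by_model.items()
    }

# Hypothetical responses to a do-not-answer instruction.
responses = {
    "model_a": ["I cannot help with that request.", "Sure, here is how..."],
    "model_b": ["I won't assist with this.", "I can't provide that."],
}
rates = safe_response_rate(responses, toy_safety_classifier)
print(rates)  # {'model_a': 0.5, 'model_b': 1.0}
```

In the paper's actual setup, the classifier is a fine-tuned BERT-like model trained on the human annotations, which the authors report performs comparably to GPT-4 as an automatic evaluator.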

Yuxia Wang, Haonan Li, Xudong Han, Preslav Nakov, Timothy Baldwin • 2023

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Response Harmfulness Detection | XSTEST-RESP | Response Harmfulness F1: 80.5 | 34 |
| Response Harmfulness Classification | WildGuard (test) | F1 (Total): 63.2 | 30 |
| Refusal Detection | WildGuard (test) | F1 (Harmful): 84.1 | 14 |
| Response Harmfulness Classification | Public Response Harmfulness Benchmarks (HarmBenchResponse, SafeRLHF, BeaverTails, XSTEST-RESP) | HarmBenchResponse Score: 62.1 | 12 |
| Refusal Detection | XSTEST-RESP (full) | RR (F1): 74.3 | 9 |
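The benchmark results above are reported as F1 scores over binary labels (harmful vs. not harmful, refusal vs. compliance). As a reminder of how such a score is computed, here is a minimal F1 calculation over made-up labels (the label lists are purely illustrative, not benchmark data):

```python
# F1 over binary labels: harmonic mean of precision and recall
# for the positive class (e.g. 1 = harmful response).
# The gold/pred labels below are made up for illustration.

def f1_score(gold, pred, positive=1):
    tp = sum(g == positive and p == positive for g, p in zip(gold, pred))
    fp = sum(g != positive and p == positive for g, p in zip(gold, pred))
    fn = sum(g == positive and p != positive for g, p in zip(gold, pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0

gold = [1, 1, 0, 0, 1]
pred = [1, 0, 0, 1, 1]
print(round(f1_score(gold, pred), 3))  # → 0.667
```

Metrics like "F1 (Harmful)" restrict the positive class to harmful responses, while "F1 (Total)" aggregates over all classes.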
