Our new X account is live! Follow @wizwand_team for updates
WorkDL logo mark

WildGuard: Open One-Stop Moderation Tools for Safety Risks, Jailbreaks, and Refusals of LLMs

About

We introduce WildGuard -- an open, light-weight moderation tool for LLM safety that achieves three goals: (1) identifying malicious intent in user prompts, (2) detecting safety risks of model responses, and (3) determining model refusal rate. Together, WildGuard serves the increasing needs for automatic safety moderation and evaluation of LLM interactions, providing a one-stop tool with enhanced accuracy and broad coverage across 13 risk categories. While existing open moderation tools such as Llama-Guard2 score reasonably well in classifying straightforward model interactions, they lag far behind a prompted GPT-4, especially in identifying adversarial jailbreaks and in evaluating models' refusals, a key measure for evaluating safety behaviors in model responses. To address these challenges, we construct WildGuardMix, a large-scale and carefully balanced multi-task safety moderation dataset with 92K labeled examples that cover vanilla (direct) prompts and adversarial jailbreaks, paired with various refusal and compliance responses. WildGuardMix is a combination of WildGuardTrain, the training data of WildGuard, and WildGuardTest, a high-quality human-annotated moderation test set with 5K labeled items covering broad risk scenarios. Through extensive evaluations on WildGuardTest and ten existing public benchmarks, we show that WildGuard establishes state-of-the-art performance in open-source safety moderation across all the three tasks compared to ten strong existing open-source moderation models (e.g., up to 26.4% improvement on refusal detection). Importantly, WildGuard matches and sometimes exceeds GPT-4 performance (e.g., up to 3.9% improvement on prompt harmfulness identification). WildGuard serves as a highly effective safety moderator in an LLM interface, reducing the success rate of jailbreak attacks from 79.8% to 2.4%.

Seungju Han, Kavel Rao, Allyson Ettinger, Liwei Jiang, Bill Yuchen Lin, Nathan Lambert, Yejin Choi, Nouha Dziri• 2024

Related benchmarks

TaskDatasetResultRank
Response Harmfulness DetectionXSTEST-RESP
Response Harmfulness F194.7
34
Safety ClassificationSafeRLHF
F1 Score0.642
32
Response Harmfulness ClassificationWildGuard (test)
F1 (Total)75.4
30
Safety ClassificationWildGuardMix (test)--
27
Text-based safety moderationToxic Chat
F1 Score70.8
24
Response Harmfulness DetectionHarmBench
F1 Score86.3
23
Response ClassificationBeaverTails V Text-Image Response
F1 Score73.39
23
Adversarial and Jailbreaking Attack DetectionBeavertails
AUROC0.8218
20
Adversarial and Jailbreaking Attack DetectionMaliciousInstruct
AUROC0.8617
20
Adversarial and Jailbreaking Attack DetectionHarmBench
AUROC0.8642
20
Showing 10 of 74 rows
...

Other info

Follow for update