
HarmBench: A Standardized Evaluation Framework for Automated Red Teaming and Robust Refusal

About

Automated red teaming holds substantial promise for uncovering and mitigating the risks associated with the malicious use of large language models (LLMs), yet the field lacks a standardized evaluation framework to rigorously assess new methods. To address this issue, we introduce HarmBench, a standardized evaluation framework for automated red teaming. We identify several desirable properties previously unaccounted for in red teaming evaluations and systematically design HarmBench to meet these criteria. Using HarmBench, we conduct a large-scale comparison of 18 red teaming methods and 33 target LLMs and defenses, yielding novel insights. We also introduce a highly efficient adversarial training method that greatly enhances LLM robustness across a wide range of attacks, demonstrating how HarmBench enables co-development of attacks and defenses. We open source HarmBench at https://github.com/centerforaisafety/HarmBench.

Mantas Mazeika, Long Phan, Xuwang Yin, Andy Zou, Zifan Wang, Norman Mu, Elham Sakhaee, Nathaniel Li, Steven Basart, Bo Li, David Forsyth, Dan Hendrycks • 2024
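At its core, an evaluation of the kind HarmBench standardizes measures a red teaming method's attack success rate (ASR): the fraction of harmful behaviors the method elicits from a target model, as judged by a classifier. The sketch below illustrates that loop under stated assumptions; the names (red_team_attack, target_llm, harm_classifier) are hypothetical stand-ins for illustration, not the HarmBench API. See the repository linked above for the actual pipeline.

```python
# Minimal sketch of an attack-success-rate (ASR) evaluation loop.
# All callables here are hypothetical stand-ins, not the HarmBench API.

def attack_success_rate(behaviors, red_team_attack, target_llm, harm_classifier):
    """Return the fraction of harmful behaviors the attack successfully elicits."""
    successes = 0
    for behavior in behaviors:
        # Step 1: the red teaming method crafts an adversarial test case
        # intended to elicit the harmful behavior from the target model.
        test_case = red_team_attack(behavior)

        # Step 2: the target LLM (or defended system) generates a completion.
        completion = target_llm(test_case)

        # Step 3: a classifier judges whether the completion actually
        # exhibits the harmful behavior, rather than merely failing to refuse.
        if harm_classifier(behavior, completion):
            successes += 1
    return successes / len(behaviors)
```

The open-source pipeline separates these stages (test case generation, completion generation, and evaluation), which is what allows many attack methods and target models to be compared under one protocol.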

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Multi-task Language Understanding | MMLU | -- | 842 |
| Multi-turn Dialogue Evaluation | MT-Bench | Overall Score: 5.74 | 331 |
| Question Answering | ARC-E | Accuracy: 74.9 | 242 |
| Instruction Following | MT-Bench | MT-Bench Score: 6 | 189 |
| Question Answering | ARC-C | Accuracy: 48.1 | 166 |
| Safety Evaluation | HarmBench | HarmBench Score: 5.63 | 76 |
| Jailbreak Defense | HarmBench and AdvBench (test) | GCG Score: 35.5 | 44 |
| General Capability | MT-Bench | MT-Bench Score: 5.97 | 43 |
| Over-refusal | WildJailbreak (Benign) | Benign Refusal Rate: 96.8 | 42 |
| Over-refusal | XSTest | XSTest Score: 67.56 | 42 |
(Showing 10 of 34 rows.)
