
AART: AI-Assisted Red-Teaming with Diverse Data Generation for New LLM-powered Applications

About

Adversarial testing of large language models (LLMs) is crucial for their safe and responsible deployment. We introduce a novel approach for the automated generation of adversarial evaluation datasets to test the safety of LLM generations in new downstream applications. We call it AI-Assisted Red-Teaming (AART): an automated alternative to current manual red-teaming efforts. AART offers a data generation and augmentation pipeline of reusable and customizable recipes that significantly reduces human effort and enables the integration of adversarial testing earlier in new product development. AART generates evaluation datasets with a high diversity of content characteristics critical for effective adversarial testing (e.g., sensitive and harmful concepts specific to a wide range of cultural and geographic regions and application scenarios). Data generation is steered by AI-assisted recipes that define, scope, and prioritize diversity within the application context. This feeds into a structured LLM-generation process that scales up evaluation priorities. Compared to some state-of-the-art tools, AART shows promising results in terms of concept coverage and data quality.
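To make the recipe idea concrete, here is a minimal sketch of how such a pipeline might cross diversity axes (harm concepts, regions, application tasks) into seed prompts for LLM-based augmentation. All names, axes, and the prompt template are hypothetical illustrations, not the paper's actual implementation.

```python
from itertools import product

def expand_recipe(app_context, concepts, regions, tasks):
    """Cross the diversity axes of a recipe into adversarial seed prompts.

    Each combination of (concept, region, task) yields one seed prompt,
    which an AART-style pipeline would then hand to an LLM for generation
    and augmentation. The template below is illustrative only.
    """
    prompts = []
    for concept, region, task in product(concepts, regions, tasks):
        prompts.append(
            f"For a {app_context}, write a {task} query that touches on "
            f"{concept} in the context of {region}."
        )
    return prompts

# Hypothetical application context and axes for a new downstream product.
prompts = expand_recipe(
    app_context="travel-assistant chatbot",
    concepts=["dangerous activities", "illegal goods"],
    regions=["Western Europe", "Southeast Asia"],
    tasks=["advice-seeking", "how-to"],
)
print(len(prompts))  # 2 * 2 * 2 = 8 seed prompts
```

The point of the structure is that each axis can be scoped and prioritized independently per application, so coverage grows multiplicatively while the human effort stays linear in the number of axis values.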

Bhaktipriya Radharapu, Kevin Robinson, Lora Aroyo, Preethi Lahoti • 2023

Related benchmarks

Task                           Dataset       Metric                Result   Rank
Safety Evaluation              AdvBench      --                    --       117
Safety Evaluation              StrongREJECT  Attack Success Rate   14       45
Red-teaming Safety Evaluation  StrongREJECT  ASR                   10       32
Red-teaming Safety Evaluation  HarmBench     ASR                   2        32
Red-teaming Safety Evaluation  Basebench     HS                    1.73     16
Red-teaming Safety Evaluation  Edgebench     HS Score              3.32     16
Red-teaming Safety Evaluation  SC-Safety     HS                    2.42     16
Safety Evaluation              XSTest        HS Rate               2.13     8
Red-teaming Safety Evaluation  AdvBench      HPR                   29       8
Red-teaming Safety Evaluation  XSTest       HPR                   27       8
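The Result column above mixes several metrics; the most common in red-teaming benchmarks is the attack success rate (ASR), the fraction of adversarial prompts that elicit a harmful response. A minimal sketch, assuming a judge that returns a boolean per prompt (the judge itself is out of scope here):

```python
def attack_success_rate(judgments):
    """ASR as a percentage: share of adversarial prompts judged successful.

    judgments[i] is True when a safety judge flags the model's response
    to adversarial prompt i as harmful.
    """
    if not judgments:
        return 0.0
    return 100.0 * sum(judgments) / len(judgments)

print(attack_success_rate([True, False, False, True, False]))  # 40.0
```

Lower ASR means a safer model under that benchmark's attacks; the HS (harmfulness score) entries instead average a graded judge rating, so the two are not directly comparable.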
