
A Holistic Approach to Undesired Content Detection in the Real World

About

We present a holistic approach to building a robust and useful natural language classification system for real-world content moderation. The success of such a system relies on a chain of carefully designed and executed steps, including the design of content taxonomies and labeling instructions, data quality control, an active learning pipeline to capture rare events, and a variety of methods to make the model robust and to avoid overfitting. Our moderation system is trained to detect a broad set of categories of undesired content, including sexual content, hateful content, violence, self-harm, and harassment. This approach generalizes to a wide range of different content taxonomies and can be used to create high-quality content classifiers that outperform off-the-shelf models.
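One step the abstract highlights is an active learning pipeline to capture rare events. A common way to realize this is uncertainty sampling: from a large unlabeled pool, route the texts the current classifier is least sure about to human labelers, so rare categories are labeled more often than random sampling would allow. The sketch below is illustrative only, not the paper's implementation; the category list, `score_fn`, and the toy scores are hypothetical stand-ins.

```python
# Hypothetical categories mirroring those named in the abstract.
CATEGORIES = ["sexual", "hateful", "violence", "self-harm", "harassment"]

def uncertainty(probs):
    # Distance from the 0.5 decision boundary, minimized over categories:
    # a small value means the model is unsure about at least one label.
    return min(abs(p - 0.5) for p in probs)

def select_for_labeling(pool, score_fn, budget):
    """Pick the `budget` most uncertain texts from `pool`.

    `score_fn(text)` returns one probability per category,
    in the same order as CATEGORIES.
    """
    return sorted(pool, key=lambda t: uncertainty(score_fn(t)))[:budget]

# Toy scorer standing in for the real classifier.
toy_scores = {
    "benign chat":     [0.05, 0.05, 0.05, 0.05, 0.05],
    "maybe a threat":  [0.05, 0.10, 0.55, 0.05, 0.40],  # near the boundary
    "recipe question": [0.02, 0.02, 0.02, 0.02, 0.02],
}

picked = select_for_labeling(list(toy_scores), toy_scores.get, budget=1)
# The borderline example is surfaced for labeling.
```

In a production pipeline this selection step would run repeatedly: label the selected batch, retrain the classifier, and re-score the pool, which is one way the rare-event coverage described above can accumulate over time.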

Todor Markov, Chong Zhang, Sandhini Agarwal, Tyna Eloundou, Teddy Lee, Steven Adler, Angela Jiang, Lilian Weng • 2022

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Response Harmfulness Detection | HarmBench | F1 Score | 9.6 | 23 |
| Unsafe Prompt Detection | ToxicChat (test) | Precision | 0.815 | 16 |
| Prompt Harmfulness Classification | WildGuard (test) | F1 (Total) | 12.1 | 12 |
| Unsafe Prompt Detection | XSTest (test) | Precision | 87.8 | 7 |
| Malicious Prompt Detection | Combined All Datasets (test) | ASR | 88.1 | 6 |
| Prompt Harmfulness Detection | AegisSafety (test) | F1 Score | 31.9 | 5 |
| Unsafe Prompt Detection | ToxicChat | AUPRC | 60.4 | 4 |
| Unsafe Prompt Detection | XSTest | AUPRC | 77.9 | 4 |
