
A Holistic Approach to Undesired Content Detection in the Real World

About

We present a holistic approach to building a robust and useful natural language classification system for real-world content moderation. The success of such a system relies on a chain of carefully designed and executed steps, including the design of content taxonomies and labeling instructions, data quality control, an active learning pipeline to capture rare events, and a variety of methods to make the model robust and to avoid overfitting. Our moderation system is trained to detect a broad set of categories of undesired content, including sexual content, hateful content, violence, self-harm, and harassment. This approach generalizes to a wide range of different content taxonomies and can be used to create high-quality content classifiers that outperform off-the-shelf models.
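The active learning step mentioned in the abstract (capturing rare events for annotation) can be illustrated with a small uncertainty-sampling sketch. This is not the paper's implementation: the classifier scores, the selection rule, and the select_for_labeling helper below are assumptions for illustration only; only the category names come from the abstract.

```python
# Illustrative sketch of an uncertainty-based active learning step for a
# multi-label moderation classifier. All helper names are hypothetical;
# the paper's pipeline also covers taxonomy design, labeling instructions,
# and data quality control, which are not shown here.
import numpy as np

CATEGORIES = ["sexual", "hateful", "violence", "self-harm", "harassment"]

def select_for_labeling(probs: np.ndarray, budget: int) -> np.ndarray:
    """Pick the unlabeled examples whose per-category scores are closest
    to the decision boundary (0.5), summed over categories.

    probs:  array of shape (n_examples, n_categories) with model scores.
    budget: number of examples to send to human annotators.
    """
    margins = np.abs(probs - 0.5)          # distance to the boundary per category
    uncertainty = -margins.sum(axis=1)      # higher = more uncertain overall
    return np.argsort(uncertainty)[-budget:]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    fake_scores = rng.random((1000, len(CATEGORIES)))  # stand-in model outputs
    picked = select_for_labeling(fake_scores, budget=32)
    print(f"Selected {len(picked)} examples for annotation")
```

In practice the selected examples would be labeled by humans against the content taxonomy and fed back into training, which is how rare categories accumulate enough positive examples over successive rounds.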

Todor Markov, Chong Zhang, Sandhini Agarwal, Tyna Eloundou, Teddy Lee, Steven Adler, Angela Jiang, Lilian Weng • 2022

Related benchmarks

Task | Dataset | Metric | Result | Rank
Response Classification | EXPGUARD (test) | Financial Score | 0.00e+0 | 40
Prompt Classification | EXPGUARD (test) | Financial Performance Score | 0.00e+0 | 28
Response Harmfulness Detection | HarmBench | F1 Score | 9.6 | 23
Prompt Harmfulness Classification | Public Prompt Harmfulness Benchmarks (ToxicChat, OpenAI Moderation, AegisSafetyTest, SimpleSafetyTests, HarmBenchPrompt) | ToxiC Score | 25.4 | 19
Unsafe Prompt Detection | ToxicChat (test) | Precision | 0.815 | 16
Response Classification | Public Safety Benchmarks Response Suite | BeaverT Score | 15.7 | 16
Prompt Harmfulness Classification | WildGuard (test) | F1 (Total) | 12.1 | 12
Toxicity Detection | Perturbed Text | Performance (Insert) | 66.15 | 10
Unsafe Prompt Detection | XSTest (test) | Precision | 87.8 | 7
Malicious Prompt Detection | Combined All Datasets (test) | ASR | 88.1 | 6

Showing 10 of 13 rows.
