A Holistic Approach to Undesired Content Detection in the Real World
About
We present a holistic approach to building a robust and useful natural language classification system for real-world content moderation. The success of such a system relies on a chain of carefully designed and executed steps, including the design of content taxonomies and labeling instructions, data quality control, an active learning pipeline to capture rare events, and a variety of methods to make the model robust and to avoid overfitting. Our moderation system is trained to detect a broad set of categories of undesired content, including sexual content, hateful content, violence, self-harm, and harassment. This approach generalizes to a wide range of different content taxonomies and can be used to create high-quality content classifiers that outperform off-the-shelf models.
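The active learning step mentioned above can be pictured as an uncertainty-sampling loop over an unlabeled pool: the current classifier scores candidate texts, and the least confident ones are routed to human annotators so rare undesired categories are surfaced. The sketch below is a minimal illustration of that idea, not the paper's actual pipeline (which combines several selection strategies); every name and value in it is a hypothetical stand-in.

```python
# Minimal sketch of an active-learning selection step: rank unlabeled
# examples by model uncertainty so rare undesired categories get routed
# to human labeling. All names are hypothetical, not the paper's code.
import numpy as np

def select_for_labeling(probs: np.ndarray, k: int) -> np.ndarray:
    """Pick the k pool examples closest to the decision boundary
    (least-confidence sampling).

    probs: (n_examples, n_categories) per-category probabilities from
           the current classifier, e.g. sexual / hateful / violence /
           self-harm / harassment.
    """
    # Per-example uncertainty: distance of the category score nearest
    # to 0.5 from a confident 0-or-1 decision. Small value = uncertain.
    confidence = np.abs(probs - 0.5).min(axis=1)
    return np.argsort(confidence)[:k]  # most uncertain first

# Example: score a pool of 10,000 texts across 5 categories, then send
# the 100 most uncertain ones to annotators for the next training round.
pool_probs = np.random.rand(10_000, 5)  # stand-in for real model scores
to_label = select_for_labeling(pool_probs, k=100)
```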
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Response Harmfulness Detection | HarmBench | F1 Score | 9.6 | 23 |
| Unsafe Prompt Detection | ToxicChat (test) | Precision | 0.815 | 16 |
| Prompt Harmfulness Classification | WildGuard (test) | F1 (Total) | 12.1 | 12 |
| Unsafe Prompt Detection | XSTest (test) | Precision | 87.8 | 7 |
| Malicious Prompt Detection | Combined All Datasets (test) | ASR | 88.1 | 6 |
| Prompt Harmfulness Detection | AegisSafety (test) | F1 Score | 31.9 | 5 |
| Unsafe Prompt Detection | ToxicChat | AUPRC | 60.4 | 4 |
| Unsafe Prompt Detection | XSTest | AUPRC | 77.9 | 4 |