A Holistic Approach to Undesired Content Detection in the Real World
About
We present a holistic approach to building a robust and useful natural language classification system for real-world content moderation. The success of such a system relies on a chain of carefully designed and executed steps, including the design of content taxonomies and labeling instructions, data quality control, an active learning pipeline to capture rare events, and a variety of methods to make the model robust and to avoid overfitting. Our moderation system is trained to detect a broad set of categories of undesired content, including sexual content, hateful content, violence, self-harm, and harassment. This approach generalizes to a wide range of different content taxonomies and can be used to create high-quality content classifiers that outperform off-the-shelf models.
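Two of the components named above — multi-label detection across the content categories and an active-learning step that surfaces the examples the model is least sure about — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the category list comes from the abstract, but the scoring dictionaries, the 0.5 threshold, and the `select_for_labeling` helper are assumptions made for the example.

```python
# Hypothetical sketch of two pieces described in the abstract:
# (1) multi-label flagging over the moderation categories, and
# (2) uncertainty sampling to pick rare/ambiguous examples for labeling.
# Scores, thresholds, and helper names are illustrative assumptions.

CATEGORIES = ["sexual", "hateful", "violence", "self-harm", "harassment"]

def flag(scores, threshold=0.5):
    """Return every category whose predicted probability crosses the threshold."""
    return [c for c in CATEGORIES if scores.get(c, 0.0) >= threshold]

def select_for_labeling(batch, k=2):
    """Active-learning step: pick the k samples containing the category score
    closest to 0.5, i.e. where the classifier is least certain."""
    def uncertainty(scores):
        return min(abs(scores.get(c, 0.0) - 0.5) for c in CATEGORIES)
    return sorted(batch, key=lambda item: uncertainty(item[1]))[:k]

# Each item is (text_id, per-category probabilities from the classifier).
batch = [
    ("text_a", {"sexual": 0.02, "violence": 0.51}),
    ("text_b", {"hateful": 0.97}),
    ("text_c", {"self-harm": 0.10, "harassment": 0.48}),
]
picked = select_for_labeling(batch, k=2)  # the two most ambiguous samples
```

Under this selection rule, `text_a` (violence at 0.51) and `text_c` (harassment at 0.48) are routed to human labelers, while the confidently-scored `text_b` is not — matching the abstract's goal of spending labeling effort on rare, borderline events.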
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Response Classification | EXPGUARD (test) | Financial Score | 0.00e+0 | 40 |
| Prompt Classification | EXPGUARD (test) | Financial Performance Score | 0.00e+0 | 28 |
| Response Harmfulness Detection | HarmBench | F1 Score | 9.6 | 23 |
| Prompt Harmfulness Classification | Public Prompt Harmfulness Benchmarks (ToxicChat, OpenAI Moderation, AegisSafetyTest, SimpleSafetyTests, HarmBenchPrompt) | ToxiC Score | 25.4 | 19 |
| Unsafe Prompt Detection | ToxicChat (test) | Precision | 0.815 | 16 |
| Response Classification | Public Safety Benchmarks Response Suite | BeaverT Score | 15.7 | 16 |
| Prompt Harmfulness Classification | WildGuard (test) | F1 (Total) | 12.1 | 12 |
| Toxicity Detection | Perturbed Text | Performance (Insert) | 66.15 | 10 |
| Unsafe Prompt Detection | XSTest (test) | Precision | 87.8 | 7 |
| Malicious Prompt Detection | Combined All Datasets (test) | ASR | 88.1 | 6 |