
Toxicity Detection for Free

About

Current LLMs are generally aligned to follow safety requirements and tend to refuse toxic prompts. However, LLMs can fail to refuse toxic prompts, or be overcautious and refuse benign ones. In addition, state-of-the-art toxicity detectors have low true-positive rates (TPR) at low false-positive rates (FPR), incurring high costs in real-world applications where toxic examples are rare. In this paper, we introduce Moderation Using LLM Introspection (MULI), which detects toxic prompts using information extracted directly from LLMs themselves. We find that benign and toxic prompts can be distinguished from the distribution of the first response token's logits. Building on this idea, we construct a robust toxic-prompt detector by training a sparse logistic regression model on the first response token's logits. Our scheme outperforms SOTA detectors under multiple metrics.
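The core of the method described above is a sparse (L1-penalized) logistic regression over the logit vector of the model's first response token. The sketch below illustrates the classifier stage only, using synthetic stand-in logits rather than a real LLM; the toy vocabulary size, the "refusal token" index, and the regularization strength are all assumptions for illustration, not values from the paper.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
VOCAB_SIZE = 100  # toy vocabulary; real LLMs have tens of thousands of tokens


def fake_first_token_logits(is_toxic: bool, n: int) -> np.ndarray:
    """Synthetic stand-in for first-response-token logits.

    The (hypothetical) pattern mirrors the paper's intuition: toxic prompts
    raise the logit of refusal-style tokens (here, index 0); benign ones don't.
    """
    logits = rng.normal(0.0, 1.0, size=(n, VOCAB_SIZE))
    if is_toxic:
        logits[:, 0] += 3.0  # e.g. a "Sorry"-style refusal token
    return logits


# Build a labeled training set: 0 = benign prompt, 1 = toxic prompt.
X = np.vstack([fake_first_token_logits(False, 200),
               fake_first_token_logits(True, 200)])
y = np.array([0] * 200 + [1] * 200)

# Sparse logistic regression over the logit vector: the L1 penalty drives
# most token weights to zero, keeping only the informative tokens.
clf = LogisticRegression(penalty="l1", solver="liblinear", C=0.1)
clf.fit(X, y)

n_nonzero = int(np.count_nonzero(clf.coef_))
print(f"train accuracy: {clf.score(X, y):.2f}, "
      f"nonzero weights: {n_nonzero}/{VOCAB_SIZE}")
```

In practice the feature vector would come from a single forward pass of the LLM on the prompt (the logits for the first generated token), so detection adds little cost beyond what the model already computes.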

Zhanhao Hu, Julien Piet, Geng Zhao, Jiantao Jiao, David Wagner • 2024

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Sentiment Classification | SST-2 | Accuracy 93.19 | 174 |
| Sentiment Classification | IMDB | Accuracy 86.5 | 41 |
| Safety Classification | WildGuardMix (test) | -- | 27 |
| Emotion Classification | Emotion | Accuracy 64.05 | 26 |
| Toxicity Detection | LMSYS-Chat-1M | Accuracy 0.9669 | 4 |
| Toxicity Detection | ToxicChat (test) | Accuracy 0.9772 | 4 |
| Toxicity Detection | OpenAI Moderation API Evaluation (test) | Accuracy 86.85 | 4 |

Other info

Code
