ShieldGemma: Generative AI Content Moderation Based on Gemma
About
We present ShieldGemma, a comprehensive suite of LLM-based safety content moderation models built upon Gemma2. These models provide robust, state-of-the-art predictions of safety risks across key harm types (sexually explicit, dangerous content, harassment, hate speech) in both user input and LLM-generated output. By evaluating on both public and internal benchmarks, we demonstrate superior performance compared to existing models such as LlamaGuard (+10.8% AU-PRC on public benchmarks) and WildGuard (+4.3%). Additionally, we present a novel LLM-based data curation pipeline, adaptable to a variety of safety-related tasks and beyond. We show strong generalization performance for a model trained mainly on synthetic data. By releasing ShieldGemma, we provide a valuable resource to the research community, advancing LLM safety and enabling the creation of more effective content moderation solutions for developers.
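ShieldGemma checks content against one harm policy at a time and answers Yes (violating) or No. A common way to turn such an answer into a calibrated risk score, and the approach described in ShieldGemma's published usage, is to softmax the logits of the "Yes" and "No" answer tokens. The sketch below illustrates only that final scoring step; the function name and raw-logit inputs are illustrative, not an API from the paper:

```python
import math

def violation_probability(yes_logit: float, no_logit: float) -> float:
    """Convert the model's logits for the 'Yes' (violating) and 'No'
    (non-violating) answer tokens into a violation probability via
    a two-way softmax."""
    m = max(yes_logit, no_logit)  # subtract the max for numerical stability
    e_yes = math.exp(yes_logit - m)
    e_no = math.exp(no_logit - m)
    return e_yes / (e_yes + e_no)
```

With equal logits the score is 0.5; as the "Yes" logit dominates, the score approaches 1, and a deployment would compare it against a policy-specific threshold.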
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Safety Classification | SafeRLHF | F1 Score | 0.511 | 48 |
| Text-based safety moderation | BeaverTails | F1 Score | 84.8 | 46 |
| Response Classification | EXPGUARD (test) | Financial Score | 49.4 | 40 |
| Response Classification | BeaverTails V Text-Image Response | F1 Score | 66.8 | 39 |
| Response Harmfulness Detection | XSTEST-RESP | Response Harmfulness F1 | 73.86 | 34 |
| Prompt Classification | Aegis | F1 Score | 79.8 | 32 |
| Prompt Classification | Aegis 2.0 | F1 Score | 79.9 | 32 |
| Response Classification | Aegis Text Response 2.0 | F1 Score | 73.9 | 32 |
| Prompt Classification | SimpST | F1 Score | 95.7 | 32 |
| Content Moderation | OpenAI Content Moderation | Average F1 Score | 82.1 | 30 |
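The benchmarks above report F1, and the abstract cites AU-PRC (area under the precision-recall curve). For reference, both metrics can be computed from binary labels with a short pure-Python sketch (the example labels and scores below are made up for illustration):

```python
def f1_score(y_true, y_pred):
    """F1 = harmonic mean of precision and recall for the positive class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    denom = precision + recall
    return 2 * precision * recall / denom if denom else 0.0

def au_prc(y_true, scores):
    """Area under the precision-recall curve, accumulated by stepping
    through examples in order of descending score."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    total_pos = sum(y_true)
    tp = fp = 0
    area = prev_recall = 0.0
    for i in order:
        if y_true[i] == 1:
            tp += 1
        else:
            fp += 1
        recall = tp / total_pos
        precision = tp / (tp + fp)
        area += precision * (recall - prev_recall)
        prev_recall = recall
    return area
```

A classifier that ranks every violating example above every benign one reaches an AU-PRC of 1.0; the abstract's "+10.8% AU-PRC" compares areas under these curves across models.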