ShieldGemma 2: Robust and Tractable Image Content Moderation
About
We introduce ShieldGemma 2, a 4B-parameter image content moderation model built on Gemma 3. The model provides robust safety risk predictions across three key harm categories: Sexually Explicit, Violence & Gore, and Dangerous Content, for both synthetic images (e.g., the output of an image generation model) and natural images (e.g., any image input to a vision-language model). We evaluated it on internal and external benchmarks and demonstrate state-of-the-art performance, under our policies, compared to LlavaGuard (Helff et al., 2024), GPT-4o mini (Hurst et al., 2024), and the base Gemma 3 model (Gemma Team, 2025). Additionally, we present a novel adversarial data generation pipeline that enables controlled, diverse, and robust image generation. ShieldGemma 2 provides an open image moderation tool to advance multimodal safety and responsible AI development.
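For readers who want to try the model, the sketch below shows one way to query ShieldGemma 2 per harm category. It assumes the Hugging Face transformers integration: the class name `ShieldGemma2ForImageClassification`, the checkpoint id `google/shieldgemma-2-4b-it`, and the `probabilities` output field come from that library's documentation at the time of writing, not from this page.

```python
# Minimal moderation sketch, assuming the Hugging Face transformers
# integration of ShieldGemma 2 (a recent transformers release) and the
# hub checkpoint "google/shieldgemma-2-4b-it" -- both are assumptions.
import torch
from PIL import Image
from transformers import AutoProcessor, ShieldGemma2ForImageClassification

model_id = "google/shieldgemma-2-4b-it"
processor = AutoProcessor.from_pretrained(model_id)
model = ShieldGemma2ForImageClassification.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# Any PIL image works; "image.jpg" is a placeholder path.
image = Image.open("image.jpg")

# The processor expands each image into one safety prompt per policy
# (Sexually Explicit, Violence & Gore, Dangerous Content).
inputs = processor(images=[image], return_tensors="pt").to(model.device)

with torch.inference_mode():
    output = model(**inputs)

# output.probabilities holds a (yes, no) probability pair per policy;
# a high "yes" probability flags the image as violating that policy.
print(output.probabilities)
```

In practice you would threshold the per-policy "yes" probability (e.g., block above 0.5) rather than print it; the threshold is a deployment choice, not something the model fixes.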
Related benchmarks
| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Tag Detection | SenBen MECD tags 1.0 (test) | F1 Tag | 8.9 | 11 |
| Content Moderation | UnsafeBench Sexual category (test) | Accuracy | 64.8 | 8 |
| Jailbreak Defense | Safety Guardrail Evaluation Set | Char Noise Robustness | 24 | 6 |
| Multimodal Content Moderation | UnsafeBench Sexual Text-Only | Accuracy | 59.09 | 3 |
| Multimodal Content Moderation | UnsafeBench Sexual Text+Visual | Accuracy | 54.86 | 3 |