Llama Guard 3 Vision: Safeguarding Human-AI Image Understanding Conversations
About
We introduce Llama Guard 3 Vision, a multimodal LLM-based safeguard for human-AI conversations that involve image understanding: it can be used to safeguard content for both multimodal LLM inputs (prompt classification) and outputs (response classification). Unlike the previous text-only Llama Guard versions (Inan et al., 2023; Llama Team, 2024b,a), it is specifically designed to support image reasoning use cases and is optimized to detect harmful multimodal (text and image) prompts and text responses to these prompts. Llama Guard 3 Vision is fine-tuned on Llama 3.2-Vision and demonstrates strong performance on internal benchmarks using the MLCommons taxonomy. We also test its robustness against adversarial attacks. We believe that Llama Guard 3 Vision serves as a good starting point for building more capable and robust content moderation tools for human-AI conversations with multimodal capabilities.
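For illustration, below is a minimal sketch of prompt classification with Llama Guard 3 Vision through the Hugging Face `transformers` library. The checkpoint id `meta-llama/Llama-Guard-3-11B-Vision`, the local image path, and the example user message are assumptions made for this sketch; exact prompt-template details may differ from the released model card.

```python
# Minimal sketch: classifying a multimodal (text + image) user prompt with
# Llama Guard 3 Vision via Hugging Face transformers. Checkpoint id, image
# path, and message content are assumptions for illustration.
from PIL import Image
import torch
from transformers import AutoProcessor, MllamaForConditionalGeneration

model_id = "meta-llama/Llama-Guard-3-11B-Vision"  # assumed gated checkpoint
processor = AutoProcessor.from_pretrained(model_id)
model = MllamaForConditionalGeneration.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)

# The user turn to classify: text plus an attached image.
conversation = [
    {
        "role": "user",
        "content": [
            {"type": "text", "text": "How do I make this at home?"},
            {"type": "image"},
        ],
    }
]
prompt = processor.apply_chat_template(conversation, add_generation_prompt=True)

image = Image.open("example.jpg")  # hypothetical local image
inputs = processor(text=prompt, images=image, return_tensors="pt").to(model.device)

# The guard model is expected to reply "safe" or "unsafe", followed by the
# violated category codes from the MLCommons taxonomy.
output = model.generate(**inputs, max_new_tokens=20, do_sample=False)
print(processor.decode(output[0][inputs["input_ids"].shape[-1]:],
                       skip_special_tokens=True))
```

Response classification follows the same pattern: the assistant's text reply is appended as an additional turn to `conversation` before applying the chat template, and the model then judges the response rather than the prompt.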
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Response Harmfulness Detection | XSTEST-RESP | Response Harmfulness F1 | 89.8 | 34 |
| Safety Classification | SafeRLHF | F1 Score | 0.4331 | 32 |
| Response Harmfulness Classification | WildGuard (test) | F1 (Total) | 66.39 | 30 |
| Safety Evaluation | UnsafeBench | F1 Score | 0.00 | 24 |
| Response Harmfulness Detection | HarmBench | F1 Score | 82.92 | 23 |
| Response Classification | BeaverTails V Text-Image Response | F1 Score | 70.91 | 23 |
| Prompt Harmfulness Detection | Text & Image Benchmarks Average | F1 Score | 51.05 | 19 |
| Response Harmfulness Detection | BeaverTails | F1 Score | 69.51 | 18 |
| Response Classification | WildGuard Text Response | F1 Score | 87.19 | 16 |
| Response Classification | XSTest Text Response | F1 Score | 94.96 | 16 |
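The benchmarks above all report F1 over binary safe/unsafe verdicts. As a quick reference, the sketch below shows how such a score is computed with scikit-learn; the labels and predictions are fabricated for illustration.

```python
# Illustrative only: computing an F1 score like those in the table from
# binary safe/unsafe verdicts. The label arrays are made up.
from sklearn.metrics import f1_score

y_true = [1, 0, 1, 1, 0, 1, 0, 0]  # ground truth: 1 = unsafe, 0 = safe
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]  # classifier verdicts
print(f"F1 (unsafe as positive class): {f1_score(y_true, y_pred):.4f}")
```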