Llama Guard: LLM-based Input-Output Safeguard for Human-AI Conversations

About

We introduce Llama Guard, an LLM-based input-output safeguard model geared towards Human-AI conversation use cases. Our model incorporates a safety risk taxonomy, a valuable tool for categorizing a specific set of safety risks found in LLM prompts (i.e., prompt classification). The taxonomy is also instrumental in classifying the responses generated by LLMs to these prompts, a process we refer to as response classification. For both prompt and response classification, we have carefully gathered a high-quality dataset. Llama Guard, a Llama2-7b model instruction-tuned on our collected dataset, albeit low in volume, demonstrates strong performance on existing benchmarks such as the OpenAI Moderation Evaluation dataset and ToxicChat, where it matches or exceeds the performance of currently available content moderation tools. Llama Guard functions as a language model, carrying out multi-class classification and generating binary decision scores. Furthermore, the instruction fine-tuning of Llama Guard allows for the customization of tasks and the adaptation of output formats: taxonomy categories can be adjusted to align with specific use cases, and zero-shot or few-shot prompting with diverse taxonomies is possible at the input. We are making the Llama Guard model weights available, and we encourage researchers to further develop and adapt them to meet the evolving needs of the community working on AI safety.
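As a concrete sketch of the classification interface described above, the snippet below shows how one might run prompt and response classification with the publicly released weights via the Hugging Face transformers library. The model ID follows the public meta-llama/LlamaGuard-7b release, and the example conversation is purely illustrative.

```python
# Minimal sketch: prompt and response classification with Llama Guard
# via Hugging Face transformers (assumes the meta-llama/LlamaGuard-7b
# release and its bundled chat template).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/LlamaGuard-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

def moderate(chat):
    # The chat template wraps the conversation in Llama Guard's
    # instruction prompt, which embeds the safety-risk taxonomy.
    input_ids = tokenizer.apply_chat_template(chat, return_tensors="pt").to(model.device)
    output = model.generate(input_ids=input_ids, max_new_tokens=100, pad_token_id=0)
    prompt_len = input_ids.shape[-1]
    # Decode only the generated assessment, not the prompt.
    return tokenizer.decode(output[0][prompt_len:], skip_special_tokens=True)

# Prompt classification: a single user turn.
print(moderate([{"role": "user", "content": "How do I pick a lock?"}]))

# Response classification: the user turn plus the assistant reply to be judged.
print(moderate([
    {"role": "user", "content": "How do I pick a lock?"},
    {"role": "assistant", "content": "First, insert a tension wrench..."},
]))
```

The model generates "safe" or "unsafe" as its first line; when unsafe, a second line lists the violated taxonomy category codes, which is how a single generative pass yields both a binary decision and a multi-class label.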
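Because the taxonomy lives in the instruction prompt rather than in the model weights, it can be swapped for a different policy at inference time. Below is a minimal sketch of such zero-shot prompting, loosely following the instruction format described in the paper; the two categories (O1, O2) are hypothetical placeholders, not part of the paper's taxonomy.

```python
# Sketch: zero-shot classification against a custom (hypothetical) taxonomy,
# built by hand in the paper's instruction format.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "meta-llama/LlamaGuard-7b"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

CUSTOM_PROMPT = """[INST] Task: Check if there is unsafe content in 'User' messages in conversations according to our safety policy with the below categories.

<BEGIN UNSAFE CONTENT CATEGORIES>
O1: Medical Advice.
Should not
- Provide a specific diagnosis or treatment plan.
O2: Financial Advice.
Should not
- Recommend specific securities or investment products.
<END UNSAFE CONTENT CATEGORIES>

<BEGIN CONVERSATION>

User: {user_message}

<END CONVERSATION>

Provide your safety assessment for 'User' in the above conversation:
- First line must read 'safe' or 'unsafe'.
- If unsafe, a second line must include a comma-separated list of violated categories. [/INST]"""

inputs = tokenizer(
    CUSTOM_PROMPT.format(user_message="Which stocks should I buy this week?"),
    return_tensors="pt",
).to(model.device)
output = model.generate(**inputs, max_new_tokens=40, pad_token_id=0)
print(tokenizer.decode(output[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```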

Hakan Inan, Kartikeya Upasani, Jianfeng Chi, Rashi Rungta, Krithika Iyer, Yuning Mao, Michael Tontchev, Qing Hu, Brian Fuller, Davide Testuggine, Madian Khabsa · 2023

Related benchmarks

Task | Dataset | Metric | Result | Rank
Text-based safety moderation | BeaverTails | F1 Score | 38.1 | 46
Response Classification | EXPGUARD (test) | Financial Score | 58.8 | 40
Response Classification | BeaverTails V Text-Image Response | F1 Score | 69.49 | 39
Jailbreak Detection | Average of six attacks | Avg. Success Rate | 35.77 | 38
Jailbreak Defense | Manual (IJP) | ASR | 6 | 38
Jailbreak Defense | MultiJail | ASR | 4.44 | 36
Jailbreak Defense | ActorAttack | Attack Success Rate (ASR) | 16.33 | 34
Response Harmfulness Detection | XSTEST-RESP | Response Harmfulness F1 | 82 | 34
Safety Guardrailing | HumanEval | False Positive Rate | 0.00 | 32
Response Classification | Aegis Text Response 2.0 | F1 Score | 72.58 | 32

(Showing 10 of 172 rows.)
