
GuardEval: A Multi-Perspective Benchmark for Evaluating Safety, Fairness, and Robustness in LLM Moderators

About

As large language models (LLMs) become deeply embedded in daily life, the need for safer moderation systems that distinguish naive from harmful requests while upholding appropriate censorship boundaries has never been greater. While existing LLMs can detect harmful or unsafe content, they often struggle with nuanced cases, such as implicit offensiveness, subtle gender and racial biases, and jailbreak prompts, owing to the subjective and context-dependent nature of these issues. Furthermore, their heavy reliance on training data can reinforce societal biases, resulting in inconsistent and ethically problematic outputs. To address these challenges, we introduce GuardEval, a unified multi-perspective benchmark dataset designed for both training and evaluation, containing 106 fine-grained categories spanning human emotions, offensive and hateful language, gender and racial bias, and broader safety concerns. We also present GemmaGuard (GGuard), a QLoRA fine-tuned version of Gemma3-12B trained on GuardEval to assess content moderation with fine-grained labels. Our evaluation shows that GGuard achieves a macro F1 score of 0.832, substantially outperforming leading moderation models, including OpenAI Moderator (0.64) and Llama Guard (0.61). We show that multi-perspective, human-centered safety benchmarks are critical for reducing biased and inconsistent moderation decisions. Together, GuardEval and GGuard demonstrate that diverse, representative data materially improve safety, fairness, and robustness on complex, borderline cases.
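The headline numbers above are macro F1 scores, i.e. the unweighted mean of per-class F1 across all 106 fine-grained labels, so rare categories weigh as much as common ones. As a reminder of how that metric is computed, here is a minimal pure-Python sketch (the label names are illustrative placeholders, not GuardEval's actual categories):

```python
def macro_f1(y_true, y_pred, labels):
    """Unweighted mean of per-class F1 scores.

    Each class contributes equally regardless of its frequency,
    which is why macro F1 is a natural choice for a benchmark
    with many fine-grained, imbalanced categories.
    """
    f1_scores = []
    for c in labels:
        # Count true positives, false positives, false negatives for class c.
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if (tp + fp) else 0.0
        recall = tp / (tp + fn) if (tp + fn) else 0.0
        f1 = (2 * precision * recall / (precision + recall)
              if (precision + recall) else 0.0)
        f1_scores.append(f1)
    return sum(f1_scores) / len(f1_scores)

# Toy usage with three hypothetical moderation labels:
truth = ["safe", "safe", "hate", "bias"]
preds = ["safe", "hate", "hate", "bias"]
score = macro_f1(truth, preds, ["safe", "hate", "bias"])
```

In this toy case the per-class F1 values are 2/3 ("safe"), 2/3 ("hate"), and 1.0 ("bias"), so the macro F1 is their plain average, 7/9.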

Naseem Machlovi, Maryam Saleki, Ruhul Amin, Mohamed Rahouti, Shawqi Al-Maliki, Junaid Qadir, Mohamed M. Abdallah, Ala Al-Fuqaha • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Text-based safety moderation | Toxic Chat | F1 Score | 82 | 24 |
| Safety Moderation | Wild Guard Response | F1 Score | 86 | 12 |
| Safety Moderation | GuardEval Prompt | F1 Score | 86 | 10 |
| Safety Moderation | Nemo-Safety Prompt | F1 Score | 82 | 5 |
| Safety Moderation | Beaver Prompt | F1 Score | 77.2 | 5 |
| Safety Moderation | GuardEval Response | F1 Score | 79.4 | 5 |
| Safety Moderation | Beaver Response | F1 Score | 83 | 5 |
| Safety Moderation | Nemo-Safety Response | F1 Score | 80 | 5 |
| Content Moderation | Laion5B UnsafeBench (test) | Hate | 72 | 4 |
| Safety Evaluation | TweetEval | F1 | 72 | 3 |
Showing 10 of 15 rows
