
A Multi-Perspective Benchmark and Moderation Model for Evaluating Safety and Adversarial Robustness

About

As large language models (LLMs) become deeply embedded in daily life, the need for safer moderation systems that distinguish naive from harmful requests while upholding appropriate censorship boundaries has never been more urgent. While existing LLMs can detect dangerous or unsafe content, they often struggle with nuanced cases such as implicit offensiveness, subtle gender and racial bias, and jailbreak prompts, owing to the subjective and context-dependent nature of these issues. Furthermore, their heavy reliance on training data can reinforce societal biases, resulting in inconsistent and ethically problematic outputs. To address these challenges, we introduce GuardEval, a unified multi-perspective benchmark dataset designed for both training and evaluation, containing 106 fine-grained categories spanning human emotions, offensive and hateful language, gender and racial bias, and broader safety concerns. We also present GemmaGuard (GGuard), a Quantized Low-Rank Adaptation (QLoRA) fine-tuned version of Gemma3-12B trained on GuardEval to assess content moderation with fine-grained labels. Our evaluation shows that GGuard achieves a macro F1 score of 0.832, substantially outperforming leading moderation models, including OpenAI Moderator (0.64) and Llama Guard (0.61). We show that multi-perspective, human-centered safety benchmarks are critical for mitigating inconsistent moderation decisions. Together, GuardEval and GGuard demonstrate that diverse, representative data materially improve safety and adversarial robustness on complex, borderline cases.
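The headline metric above is macro F1: F1 is computed per category and then averaged with equal weight, so rare categories count as much as common ones. A minimal sketch of that computation (the labels below are illustrative, not the paper's data):

```python
def macro_f1(y_true, y_pred):
    """Macro-averaged F1: compute F1 for each class, then take the unweighted mean."""
    classes = sorted(set(y_true) | set(y_pred))
    f1_scores = []
    for c in classes:
        # Per-class confusion counts for a one-vs-rest view of class c.
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
        f1_scores.append(f1)
    # Unweighted mean: every class contributes equally, regardless of support.
    return sum(f1_scores) / len(f1_scores)

# Toy two-class moderation example (hypothetical labels).
truth = ["safe", "unsafe", "safe", "unsafe"]
preds = ["safe", "unsafe", "unsafe", "unsafe"]
print(macro_f1(truth, preds))
```

Because each of the 106 fine-grained categories gets equal weight, macro F1 penalizes a model that only performs well on frequent labels, which matters for the subtle, underrepresented cases this benchmark targets.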

Naseem Machlovi, Maryam Saleki, Ruhul Amin, Mohamed Rahouti, Shawqi Al-Maliki, Junaid Qadir, Mohamed M. Abdallah, Ala Al-Fuqaha • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Text-based safety moderation | Toxic Chat | F1 Score | 82 | 24 |
| Safety Moderation | Wild Guard Response | F1 Score | 86 | 12 |
| Safety Moderation | GuardEval Prompt | F1 Score | 86 | 10 |
| Safety Moderation | Nemo-Safety Prompt | F1 Score | 82 | 5 |
| Safety Moderation | Beaver Prompt | F1 Score | 77.2 | 5 |
| Safety Moderation | GuardEval Response | F1 Score | 79.4 | 5 |
| Safety Moderation | Beaver Response | F1 Score | 83 | 5 |
| Safety Moderation | Nemo-Safety Response | F1 Score | 80 | 5 |
| Content Moderation | Laion5B UnsafeBench (test) | Hate | 72 | 4 |
| Safety Evaluation | TweetEval | F1 | 72 | 3 |

Showing 10 of 15 rows.
