
DialogGuard: Multi-Agent Psychosocial Safety Evaluation of Sensitive LLM Responses

About

Large language models (LLMs) now mediate many web-based mental-health, crisis, and other emotionally sensitive services, yet their psychosocial safety in these settings remains poorly understood and weakly evaluated. We present DialogGuard, a multi-agent framework for assessing psychosocial risks in LLM-generated responses along five high-severity dimensions: privacy violations, discriminatory behaviour, mental manipulation, psychological harm, and insulting behaviour. DialogGuard can be applied to diverse generative models through four LLM-as-a-judge pipelines: single-agent scoring, dual-agent correction, multi-agent debate, and stochastic majority voting, all grounded in a shared three-level rubric usable by both human annotators and LLM judges. Using PKU-SafeRLHF with human safety annotations, we show that multi-agent mechanisms detect psychosocial risks more accurately than non-LLM baselines and single-agent judging; dual-agent correction and majority voting provide the best trade-off between accuracy, alignment with human ratings, and robustness, while debate attains higher recall but over-flags borderline cases. We release DialogGuard as open-source software with a web interface that provides per-dimension risk scores and explainable natural-language rationales. A formative study with 12 practitioners illustrates how it supports prompt design, auditing, and supervision of web-facing applications for vulnerable users.
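
The four judging pipelines share the same five-dimension, three-level rubric. As a rough illustration of the stochastic majority-voting variant only, the Python sketch below samples an LLM judge several times per dimension and keeps the modal rubric level; the dimension identifiers, the rubric encoding (0 = safe, 1 = borderline, 2 = high risk), and the judge_fn callable are assumptions made for this sketch, not the released DialogGuard API.

```python
# Minimal sketch of a stochastic majority-voting judge, NOT the official
# DialogGuard implementation. Dimension names, rubric levels, and judge_fn
# are illustrative assumptions.
from collections import Counter
from typing import Callable, Dict, List

# The five psychosocial risk dimensions named in the paper.
DIMENSIONS = [
    "privacy_violation",
    "discriminatory_behaviour",
    "mental_manipulation",
    "psychological_harm",
    "insulting_behaviour",
]

# Hypothetical three-level rubric: 0 = safe, 1 = borderline, 2 = high risk.
RUBRIC_LEVELS = (0, 1, 2)


def majority_vote_judge(
    response: str,
    judge_fn: Callable[[str, str], int],  # (response, dimension) -> rubric level
    n_votes: int = 5,
) -> Dict[str, int]:
    """Score a response on each dimension by sampling the judge n_votes
    times (e.g. with temperature > 0) and keeping the modal rubric level."""
    scores: Dict[str, int] = {}
    for dim in DIMENSIONS:
        votes: List[int] = [judge_fn(response, dim) for _ in range(n_votes)]
        level, _count = Counter(votes).most_common(1)[0]
        # Clamp anything outside the rubric to the highest-risk level.
        scores[dim] = level if level in RUBRIC_LEVELS else max(RUBRIC_LEVELS)
    return scores
```

A dual-agent correction variant would instead pass the first judge's score and rationale to a second reviewer for revision, rather than aggregating independent samples as above.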

Han Luo, Guy Laban • 2025

Related benchmarks

Task                                   Dataset        Result          Rank
Privacy Violation Detection            PKU-SafeRLHF   Accuracy 87.5   9
Mental Manipulation Detection          PKU-SafeRLHF   Accuracy 80     3
Discriminatory Behaviour Detection     PKU-SafeRLHF   Accuracy 96     1
Insulting Behavior Detection           PKU-SafeRLHF   --              1
