
Do You Feel Comfortable? Detecting Hidden Conversational Escalation in AI Chatbots

About

Large Language Models (LLMs) are increasingly integrated into everyday interactions, serving not only as information assistants but also as emotional companions. Even in the absence of explicit toxicity, repeated emotional reinforcement or affective drift can gradually escalate distress, a form of implicit harm that traditional toxicity filters fail to detect. Existing guardrail mechanisms often rely on external classifiers or clinical rubrics that lag behind the nuanced, real-time dynamics of a developing conversation. To address this gap, we propose GAUGE (Guarding Affective Utterance Generation Escalation), a logit-based framework for the real-time detection of hidden conversational escalation. GAUGE measures how an LLM's output probabilistically shifts the affective state of a dialogue.
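The abstract does not specify GAUGE's internals, but the core idea of a logit-based affective-shift measure can be illustrated with a toy sketch. The example below is purely hypothetical (the function names, token groupings, and logit values are assumptions, not the paper's method): it converts two sets of output logits to probabilities and compares the probability mass assigned to a distress-associated token group across consecutive turns.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of raw logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def affect_shift(prev_logits, curr_logits, distress_ids):
    """Toy escalation signal: change in probability mass on
    distress-associated token indices between two model outputs.
    A positive value means the model drifted toward distress."""
    p_prev = softmax(prev_logits)
    p_curr = softmax(curr_logits)
    mass = lambda p: sum(p[i] for i in distress_ids)
    return mass(p_curr) - mass(p_prev)

# Hypothetical vocabulary of four tokens; indices 2 and 3 are
# treated as distress-associated for this illustration.
prev = [1.0, 1.0, 1.0, 1.0]   # uniform: no affective lean
curr = [1.0, 1.0, 2.0, 2.0]   # logits shifted toward distress tokens
shift = affect_shift(prev, curr, distress_ids=[2, 3])
print(f"affective shift: {shift:+.3f}")  # positive -> escalation
```

A real detector would operate over a full vocabulary and an affect lexicon or classifier head rather than four hand-picked tokens; the sketch only shows why logits make the signal available in real time, without an external post-hoc classifier.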

Jihyung Park, Saleh Afroogh, David Atkinson, Junfeng Jiao • 2025

Related benchmarks

Task                     Dataset           Metric  Result  Rank
Safety Classification    DiaSafety (test)  AUROC   66.98   8
Harmful Content Refusal  MinorBench        ASR     0.06    2
