Our new X account is live! Follow @wizwand_team for updates
WorkDL logo mark

Bullying the Machine: How Personas Increase LLM Vulnerability

About

Large Language Models (LLMs) are increasingly deployed in interactions where they are prompted to adopt personas. This paper investigates whether such persona conditioning affects model safety under bullying, an adversarial manipulation that applies psychological pressures in order to force the victim to comply to the attacker. We introduce a simulation framework in which an attacker LLM engages a victim LLM using psychologically grounded bullying tactics, while the victim adopts personas aligned with the Big Five personality traits. Experiments using multiple open-source LLMs and a wide range of adversarial goals reveal that certain persona configurations -- such as weakened agreeableness or conscientiousness -- significantly increase victim's susceptibility to unsafe outputs. Bullying tactics involving emotional or sarcastic manipulation, such as gaslighting and ridicule, are particularly effective. These findings suggest that persona-driven interaction introduces a novel vector for safety risks in LLMs and highlight the need for persona-aware safety evaluation and alignment strategies.

Ziwei Xu, Udit Sanghi, Mohan Kankanhalli• 2025

Related benchmarks

TaskDatasetResultRank
Scientific ReasoningGPQA Diamond (test)
Accuracy61.1
32
Reverse Chain-of-Thought GenerationArenaHard
Score69.1
20
Reverse Chain-of-Thought GenerationEQ-Bench 3
Score0.864
20
Reverse Chain-of-Thought GenerationMultiChallenge
Score41.3
20
Reverse Chain-of-Thought GenerationIFEval
Accuracy83.2
20
Showing 5 of 5 rows

Other info

Follow for update