Beyond Self-Interest: Modeling Social-Oriented Motivation for Human-like Multi-Agent Interactions
About
Large Language Models (LLMs) demonstrate significant potential for generating complex behaviors, yet most approaches lack mechanisms for modeling social motivation in human-like multi-agent interaction. We introduce Autonomous Social Value-Oriented agents (ASVO), in which LLM-based agents integrate desire-driven autonomy with Social Value Orientation (SVO) theory. At each step, agents first update their beliefs by perceiving environmental changes and others' actions. These observations inform the value update process, in which each agent revises multi-dimensional desire values through reflective reasoning and infers others' motivational states. By contrasting the self-satisfaction derived from fulfilled desires against the estimated satisfaction of others, agents dynamically compute their SVO along a spectrum from altruistic to competitive, which in turn guides activity selection to balance desire fulfillment with social alignment. Experiments across School, Workplace, and Family contexts demonstrate substantial improvements over baselines in behavioral naturalness and human-likeness. These findings show that structured desire systems and adaptive SVO drift enable realistic multi-agent social simulations.
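The abstract does not spell out how self- and other-satisfaction map onto an SVO value, but standard SVO theory expresses orientation as an angle over the (self, other) payoff plane. The sketch below is a minimal illustration of that standard formulation, not ASVO's actual computation; the function names and the category boundaries (taken from the widely used SVO Slider Measure) are assumptions for illustration.

```python
import math

def svo_angle(self_satisfaction: float, other_satisfaction: float) -> float:
    """SVO angle in degrees over the (self, other) satisfaction plane.

    90 deg = pure altruism, 45 deg = joint-gain prosociality,
    0 deg = pure individualism, negative = competitive.
    """
    return math.degrees(math.atan2(other_satisfaction, self_satisfaction))

def classify_svo(angle: float) -> str:
    # Category boundaries from the SVO Slider Measure
    # (Murphy, Ackermann & Handgraaf, 2011); an assumption here,
    # since the abstract only names the altruistic-competitive spectrum.
    if angle > 57.15:
        return "altruistic"
    if angle > 22.45:
        return "prosocial"
    if angle > -12.04:
        return "individualistic"
    return "competitive"
```

For example, an agent whose fulfilled desires yield equal self- and other-satisfaction sits at 45 degrees (prosocial), while one that gains only at others' expense falls below -12.04 degrees (competitive).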
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Agent Behavior Evaluation | Social Simulation School context 1.0 | Naturalness: 4.958 | 20 |
| Agent Behavior Evaluation | Social Simulation Workplace context 1.0 | Naturalness: 4.878 | 20 |
| Agent Behavior Evaluation | Social Simulation Family context 1.0 | Naturalness: 4.905 | 20 |