
UserLM-R1: Modeling Human Reasoning in User Language Models with Multi-Reward Reinforcement Learning

About

User simulators serve as the critical interactive environment for agent post-training. An ideal user simulator generalizes across domains and proactively engages in negotiation by challenging or bargaining. However, current methods exhibit two issues. First, they rely on static, context-unaware profiles, which necessitate extensive manual redesign for new scenarios and thus limit generalizability. Second, they neglect human strategic thinking, leaving them vulnerable to agent manipulation. To address these issues, we propose UserLM-R1, a novel user language model with reasoning capability. Specifically, we first construct comprehensive user profiles combining static roles with dynamic, scenario-specific goals to adapt to diverse scenarios. We then propose a goal-driven decision-making policy that generates high-quality rationales before producing responses, and further refine the reasoning and strengthen strategic capabilities with supervised fine-tuning and multi-reward reinforcement learning. Extensive experimental results demonstrate that UserLM-R1 outperforms competitive baselines, particularly on the more challenging adversarial set.
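The abstract does not spell out how the multiple rewards are combined, but a common pattern in multi-reward RL is to score each aspect of a generated user turn separately and aggregate the scores into a single scalar for the policy update. The sketch below is a minimal illustration of that pattern; the reward components (role consistency, goal progress, strategic behavior), their weights, and the `UserProfile` fields are hypothetical, not taken from the paper.

```python
from dataclasses import dataclass


@dataclass
class UserProfile:
    """Profile pairing a static role with a dynamic, scenario-specific goal.

    Fields are illustrative: the paper describes this split at a high level
    without specifying a concrete schema.
    """
    role: dict   # static persona attributes (e.g. tone, background)
    goal: dict   # scenario-specific objective (e.g. a target price to bargain for)


def combined_reward(role_score: float,
                    goal_score: float,
                    strategy_score: float,
                    weights: tuple = (0.4, 0.4, 0.2)) -> float:
    """Aggregate per-aspect rewards into one scalar via a weighted sum.

    The three components and the weights are assumptions for illustration;
    any multi-reward scheme needs some such aggregation (or a multi-objective
    alternative) before a scalar policy-gradient update can be applied.
    """
    w_role, w_goal, w_strat = weights
    return w_role * role_score + w_goal * goal_score + w_strat * strategy_score


# Example: a simulated-user turn that stays in character (0.9), advances the
# bargaining goal (0.7), but shows weak strategic pushback (0.5)
reward = combined_reward(0.9, 0.7, 0.5)  # -> 0.74
```

A weighted sum is the simplest aggregation; the relative weights would control, for instance, how strongly the simulator is pushed toward strategic negotiation versus staying in role.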

Feng Zhang, Shijia Li, Chunmao Zhang, Zhanyu Ma, Jun Xu, Jiuchong Gao, Jinghua Hao, Renqing He, Jingwen Xu, Han Liu • 2026

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| User Simulation | User Simulation Dataset, Session-level (test) | Role Score: 95.21 | 11 |
| User Simulation | Adversarial User Simulation Dataset, Turn-level (test) | Robotics Score: 14.1 | 11 |
| User Simulation Quality Assessment | Session-level Human Evaluation Set, adversarial (test) | Win Rate: 86 | 3 |
| User Simulation Quality Assessment | Turn-level Human Evaluation Set, adversarial (test) | Win Rate: 168 | 3 |
