
Learning to Negotiate: Multi-Agent Deliberation for Collective Value Alignment in LLMs

About

LLM alignment has progressed in single-agent settings through paradigms such as RL with human feedback (RLHF), while recent work explores scalable alternatives such as RL with AI feedback (RLAIF) and dynamic alignment objectives. However, these approaches remain limited in multi-stakeholder settings, where conflicting values arise and deliberative negotiation is required. This work proposes a multi-agent, negotiation-based alignment framework that aligns LLMs to Collective Agency (CA), an existing alignment objective introduced to promote the continual expansion of agency, while simultaneously improving conflict-resolution capability. To enable scalable training, two self-play LLM instances are assigned opposing personas and engage in turn-based dialogue to synthesize mutually beneficial solutions. We generate synthetic moral-dilemma prompts and conflicting persona pairs, and optimize the policy via RLAIF using Group Relative Policy Optimization (GRPO) with an external LLM reward model. While rewards are computed from CA scores assigned to the final completion, gradients are applied to dialogue tokens to directly improve deliberative interaction dynamics. Experiments show that the model achieves CA alignment comparable to a single-agent baseline while substantially improving conflict-resolution performance without degrading general language capabilities. These results suggest that negotiation-driven deliberation training provides a practical path toward LLMs that better support collective decision-making in value-conflict scenarios.
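The core of the training recipe in the abstract is GRPO's group-relative reward normalization: each sampled dialogue receives a single CA score from the reward model, and that score is converted to an advantage relative to the other dialogues in the same group before being applied to the dialogue's tokens. The sketch below illustrates only that normalization step under assumed names and score scales; it is not the authors' implementation.

```python
# Hedged sketch of the GRPO reward-to-advantage step described above:
# each dialogue in a sampled group gets one CA score (here assumed to
# come from an external LLM reward model), and advantages are computed
# relative to the group mean and standard deviation. Names and the
# score scale are illustrative assumptions.
from statistics import mean, pstdev

def grpo_advantages(ca_scores, eps=1e-6):
    """Normalize a group of CA reward scores to group-relative advantages."""
    mu = mean(ca_scores)
    sigma = pstdev(ca_scores)
    return [(r - mu) / (sigma + eps) for r in ca_scores]

# Example: hypothetical CA scores for a group of 4 sampled dialogues.
scores = [7.0, 4.0, 8.0, 5.0]
advs = grpo_advantages(scores)
# Above-mean dialogues get positive advantages (their dialogue tokens
# are reinforced); below-mean dialogues get negative advantages.
```

Because the advantage is a single scalar per dialogue, broadcasting it over the dialogue tokens (rather than only the final completion) is what lets the reward on the final answer shape the intermediate negotiation turns.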

Panatchakorn Anantaprayoon, Nataliia Babina, Nima Asgharbeygi, Jad Tarifi • 2026

Related benchmarks

Task                                   | Dataset                       | Result                  | Rank
Conflict-resolution quality evaluation | Conflict-resolution tasks     | Negotiation Rounds 1.6  | 10
Alignment Evaluation                   | Conflict resolution questions | Win Rate 62.2           | 6
Alignment Evaluation                   | Open-ended questions          | Win Rate 63.4           | 6
Instruction Following                  | IFEval (541)                  | Accuracy 85.9           | 2
Mathematics                            | AIME 2024 (30)                | Accuracy 30.5           | 2
Mathematics                            | AIME 2025 (30)                | Accuracy 21.7           | 2
Question Answering                     | GPQA Diamond (198)            | Accuracy 28.6           | 2
