Learning to Negotiate: Multi-Agent Deliberation for Collective Value Alignment in LLMs
About
LLM alignment has progressed in single-agent settings through paradigms such as reinforcement learning from human feedback (RLHF), while recent work explores scalable alternatives such as RL with AI feedback (RLAIF) and dynamic alignment objectives. However, these approaches remain limited in multi-stakeholder settings, where conflicting values arise and deliberative negotiation is required. This work proposes a multi-agent, negotiation-based alignment framework that aligns LLMs to Collective Agency (CA), an existing alignment objective introduced to promote the continual expansion of agency, while simultaneously improving conflict-resolution capability. To enable scalable training, two self-play LLM instances are assigned opposing personas and engage in turn-based dialogue to synthesize mutually beneficial solutions. We generate synthetic moral-dilemma prompts and conflicting persona pairs, and optimize the policy via RLAIF using Group Relative Policy Optimization (GRPO) with an external LLM reward model. While rewards are computed from CA scores assigned to the final completion, gradients are applied to dialogue tokens to directly improve deliberative interaction dynamics. Experiments show that the model achieves CA alignment comparable to a single-agent baseline while substantially improving conflict-resolution performance without degrading general language capabilities. These results suggest that negotiation-driven deliberation training provides a practical path toward LLMs that better support collective decision-making in value-conflict scenarios.
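The training signal described above can be sketched in a few lines: GRPO replaces a learned critic with a group-relative baseline, and the scalar advantage from the final completion's CA score is broadcast over all dialogue tokens. This is a minimal illustration, not the paper's implementation; the reward values and token counts below are made up, and the actual CA scoring is done by an external LLM reward model.

```python
from typing import List

def grpo_advantages(group_rewards: List[float]) -> List[float]:
    # GRPO baselines each sampled dialogue against its own group:
    # advantage = (reward - group mean) / group std, with no critic.
    n = len(group_rewards)
    mean = sum(group_rewards) / n
    std = (sum((r - mean) ** 2 for r in group_rewards) / n) ** 0.5
    if std == 0.0:
        return [0.0] * n
    return [(r - mean) / std for r in group_rewards]

def broadcast_to_dialogue(advantage: float,
                          turn_token_counts: List[int]) -> List[List[float]]:
    # The CA reward is computed only on the final completion, but the
    # same scalar advantage is applied to every dialogue token so the
    # intermediate negotiation turns also receive gradient signal.
    return [[advantage] * n for n in turn_token_counts]

# Four self-play dialogues sampled for one dilemma prompt; the CA
# scores are illustrative stand-ins for the external reward model.
rewards = [0.8, 0.2, 0.5, 0.5]
advantages = grpo_advantages(rewards)
per_token = broadcast_to_dialogue(advantages[0], [12, 9, 15])
```

Because advantages are standardized within each sampled group, they sum to (approximately) zero, so above-average negotiations are reinforced and below-average ones are suppressed relative to their peers.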
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Conflict-resolution quality evaluation | Conflict-resolution tasks | Negotiation Rounds | 1.6 | 10 |
| Alignment Evaluation | Conflict-resolution questions | Win Rate | 62.2 | 6 |
| Alignment Evaluation | Open-ended questions | Win Rate | 63.4 | 6 |
| Instruction Following | IFEval (541) | Accuracy | 85.9 | 2 |
| Mathematics | AIME 2024 (30) | Accuracy | 30.5 | 2 |
| Mathematics | AIME 2025 (30) | Accuracy | 21.7 | 2 |
| Question Answering | GPQA Diamond (198) | Accuracy | 28.6 | 2 |