
LangMARL: Natural Language Multi-Agent Reinforcement Learning

About

Large language model (LLM) agents struggle to autonomously evolve coordination strategies in dynamic environments, largely because coarse global outcomes obscure the causal signals needed for local policy refinement. We identify this bottleneck as a multi-agent credit assignment problem, which has long been studied in classical multi-agent reinforcement learning (MARL) but remains underaddressed in LLM-based systems. Building on this observation, we propose LangMARL, a framework that brings credit assignment and policy gradient evolution from cooperative MARL into the language space. LangMARL introduces agent-level language credit assignment, pioneers gradient evolution in language space for policy improvement, and summarizes task-relevant causal relations from replayed trajectories to provide dense feedback and improve convergence under sparse rewards. Extensive experiments across diverse cooperative multi-agent tasks demonstrate improved sample efficiency, interpretability, and strong generalization.
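To make the abstract's two core ideas concrete, here is a minimal, runnable sketch of agent-level credit assignment in language space and a "language gradient" policy update. This is an illustration under assumptions, not LangMARL's actual implementation: the real critic would be an LLM queried over replayed trajectories, and all names here (`Agent`, `assign_credit`, `apply_language_gradient`) are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    policy_prompt: str          # the agent's "policy" lives in its prompt
    feedback: list = field(default_factory=list)

def assign_credit(trajectory, global_reward):
    """Stub critic: decompose a coarse global reward into per-agent
    textual feedback. LangMARL would query an LLM over the replayed
    trajectory; a trivial rule stands in so the example runs offline."""
    credit = {}
    for step in trajectory:
        note = ("your action '%s' helped the team" if global_reward > 0
                else "your action '%s' may have hurt the team") % step["action"]
        credit.setdefault(step["agent"], []).append(note)
    return credit

def apply_language_gradient(agent, credit_notes):
    """Policy update in language space: fold distilled feedback into the
    agent's prompt instead of adjusting numeric parameters."""
    agent.feedback.extend(credit_notes)
    agent.policy_prompt += "\nLessons: " + "; ".join(credit_notes)

# One sparse-reward episode: two agents, a single global outcome.
chef = Agent("chef", "You chop ingredients.")
runner = Agent("runner", "You deliver dishes.")
trajectory = [
    {"agent": "chef", "action": "chop onion"},
    {"agent": "runner", "action": "deliver soup"},
]
credit = assign_credit(trajectory, global_reward=1.0)
for a in (chef, runner):
    apply_language_gradient(a, credit[a.name])

print(chef.policy_prompt)
```

The point of the sketch is the shape of the loop: the global outcome is first attributed to individual agents as text (credit assignment), and each agent's prompt then absorbs that text (the language-space analogue of a gradient step), giving dense per-agent feedback where the environment only provided one scalar.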

Huaiyuan Yao, Longchao Da, Xiaoou Liu, Charles Fleming, Tianlong Chen, Hua Wei • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Coding | HumanEval | Pass@1 | 73.2 | 103 |
| Multi-hop Reasoning | HotpotQA | Accuracy | 60.2 | 20 |
| Multi-agent coordination | Overcooked-AI | Coordination Ring Score | 184.4 | 10 |
| Multi-agent coordination | Pistonball | Coordination Score | 37.2 | 8 |
