
Multi-Agent Guided Policy Optimization

About

Due to practical constraints such as partial observability and limited communication, Centralized Training with Decentralized Execution (CTDE) has become the dominant paradigm in cooperative Multi-Agent Reinforcement Learning (MARL). However, existing CTDE methods often underutilize centralized training or lack theoretical guarantees. We propose Multi-Agent Guided Policy Optimization (MAGPO), a novel framework that better leverages centralized training by integrating centralized guidance with decentralized execution. MAGPO uses an autoregressive joint policy for scalable, coordinated exploration and explicitly aligns it with decentralized policies to ensure deployability under partial observability. We provide theoretical guarantees of monotonic policy improvement and empirically evaluate MAGPO on 43 tasks across 6 diverse environments. Results show that MAGPO consistently outperforms strong CTDE baselines and matches or surpasses fully centralized approaches, offering a principled and practical solution for decentralized multi-agent learning. Our code and experimental data can be found at https://github.com/liyheng/MAGPO.
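The two ingredients the abstract describes, an autoregressive joint "guide" policy used during centralized training and decentralized per-agent policies aligned to it for execution, can be illustrated with a toy sketch. This is not the authors' implementation (see the linked repository for that); the tabular policies, the single-state environment, and the distillation-style update are all simplifying assumptions made for illustration.

```python
import math
import random

N_AGENTS, N_ACTIONS = 3, 4

def softmax(logits):
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

def guide_logits(agent, state, prev_actions, theta):
    # Toy tabular guide: logits keyed on (agent, global state, actions so far).
    key = (agent, state, tuple(prev_actions))
    return theta.setdefault(key, [0.0] * N_ACTIONS)

def sample_joint_action(state, theta, rng):
    # Autoregressive sampling: agent i conditions on the actions
    # already chosen by agents 0..i-1, enabling coordinated exploration.
    actions = []
    for i in range(N_AGENTS):
        probs = softmax(guide_logits(i, state, actions, theta))
        actions.append(rng.choices(range(N_ACTIONS), weights=probs)[0])
    return actions

def align_decentralized(local_obs, guide_action, phi, lr=0.5):
    # Alignment step (a crude cross-entropy/distillation update): push the
    # decentralized policy, which sees only its local observation, toward
    # the action the centralized guide selected.
    logits = phi.setdefault(local_obs, [0.0] * N_ACTIONS)
    probs = softmax(logits)
    for a in range(N_ACTIONS):
        target = 1.0 if a == guide_action else 0.0
        logits[a] += lr * (target - probs[a])

rng = random.Random(0)
theta, phi = {}, {}          # guide parameters, decentralized parameters
for step in range(200):
    state = 0                # single toy state; a real env would transition
    joint = sample_joint_action(state, theta, rng)
    for i, a in enumerate(joint):
        align_decentralized((i, state), a, phi)
```

At deployment only `phi` is used, so each agent acts from its own observation without seeing the other agents' actions, which is what makes the learned policies executable under partial observability.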

Yueheng Li, Guangming Xie, Zongqing Lu • 2025

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Combat coordination | StarCraft Multi-Agent Challenge (SMAX) | Win Rate (2s3z) | 100 | 6 |
| Connectivity maintenance | MaConnector | Connectivity (5x5x3a) | 94 | 6 |
| Cooperative foraging | LevelBasedForaging | Normalized Score (15x15, 3p, 5f) | 99 | 6 |
| Multi-agent item delivery | RobotWarehouse (RWARE) | Reward (Large Map, 4 Agents) | 7.63 | 6 |
| Cooperative navigation | Multi-Agent Particle Environment (MPE) | Spread (3 Agents) | 6.1 | 6 |
| Multi-agent coordination | CoordSum | Reward (3 Agents, 10 Steps, 30 Goal) | 153.1 | 6 |
