Is Independent Learning All You Need in the StarCraft Multi-Agent Challenge?
About
Most recently developed approaches to cooperative multi-agent reinforcement learning in the *centralized training with decentralized execution* setting involve estimating a centralized, joint value function. In this paper, we demonstrate that, despite its various theoretical shortcomings, Independent PPO (IPPO), a form of independent learning in which each agent simply estimates its local value function, can perform as well as or better than state-of-the-art joint learning approaches on the popular multi-agent benchmark suite SMAC, with little hyperparameter tuning. We also compare IPPO to several variants; the results suggest that IPPO's strong performance may be due to its robustness to some forms of environment non-stationarity.
Christian Schroeder de Witt, Tarun Gupta, Denys Makoviichuk, Viktor Makoviychuk, Philip H.S. Torr, Mingfei Sun, Shimon Whiteson • 2020
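The core idea described in the abstract — each agent estimates a value function over its *local* observation only, rather than a joint value over the combined state — can be illustrated with a deliberately simplified sketch. This is not the paper's implementation (IPPO uses PPO with neural networks); the toy below substitutes tabular TD(0) value estimation so the independent-learning structure is visible in a few lines. All names (`IndependentAgent`, the observation strings) are illustrative assumptions.

```python
class IndependentAgent:
    """Toy stand-in for an IPPO learner: each agent keeps its OWN value
    estimate keyed by its LOCAL observation (tabular TD(0), not PPO)."""

    def __init__(self, lr=0.1, gamma=0.99):
        self.values = {}   # local observation -> value estimate
        self.lr = lr       # learning rate
        self.gamma = gamma # discount factor

    def update(self, obs, reward, next_obs, done):
        # Standard TD(0) backup using only this agent's local observation.
        v_next = 0.0 if done else self.values.get(next_obs, 0.0)
        v = self.values.get(obs, 0.0)
        td_target = reward + self.gamma * v_next
        self.values[obs] = v + self.lr * (td_target - v)
        return self.values[obs]


# Two agents in a cooperative task: both receive the shared team reward,
# but each updates only its own local value table -- no centralized,
# joint value function over the combined observations is ever learned.
agents = [IndependentAgent(), IndependentAgent()]
for _ in range(500):
    local_obs = ["agent0_view", "agent1_view"]  # hypothetical observations
    team_reward = 1.0
    for agent, obs in zip(agents, local_obs):
        agent.update(obs, team_reward, obs, done=False)

# Each independent estimate drifts toward r / (1 - gamma) = 100,
# without either agent conditioning on the other's observation.
print(round(agents[0].values["agent0_view"], 1))
```

From each agent's point of view, its teammates are simply part of the environment, which is the source of the non-stationarity the abstract refers to: as the other agents' policies change during training, each agent's effective environment changes too.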