
Is Independent Learning All You Need in the StarCraft Multi-Agent Challenge?

About

Most recently developed approaches to cooperative multi-agent reinforcement learning in the "centralized training with decentralized execution" setting involve estimating a centralized, joint value function. In this paper, we demonstrate that, despite its various theoretical shortcomings, Independent PPO (IPPO), a form of independent learning in which each agent simply estimates its local value function, can perform just as well as or better than state-of-the-art joint learning approaches on the popular multi-agent benchmark suite SMAC, with little hyperparameter tuning. We also compare IPPO to several variants; the results suggest that IPPO's strong performance may be due to its robustness to some forms of environment non-stationarity.
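To illustrate the independent-learning idea the abstract describes, here is a minimal, hypothetical toy sketch (not the paper's IPPO implementation): each agent maintains its own local value estimate over its own observations, rather than all agents sharing one centralized joint value function. The class name, learning rate, and TD(0) update rule are illustrative assumptions.

```python
class IndependentLearner:
    """One agent with a local value table V(obs), updated by TD(0).

    Hypothetical sketch of independent learning: no joint value
    function is estimated; each agent sees only its own stream.
    """

    def __init__(self, lr=0.1, gamma=0.99):
        self.lr = lr
        self.gamma = gamma
        self.values = {}  # local value estimates, keyed by observation

    def value(self, obs):
        return self.values.get(obs, 0.0)

    def update(self, obs, reward, next_obs):
        # TD(0) update applied to this agent's local value function only.
        target = reward + self.gamma * self.value(next_obs)
        self.values[obs] = self.value(obs) + self.lr * (target - self.value(obs))


# Two agents learn independently from their own observation streams;
# neither ever sees the other's observations or values.
agents = [IndependentLearner() for _ in range(2)]
for _ in range(50):
    for i, agent in enumerate(agents):
        agent.update(obs=("s", i), reward=1.0, next_obs=("s", i))
```

IPPO replaces the tabular TD(0) update above with a PPO actor-critic per agent, but the structural point is the same: each critic conditions only on local information.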

Christian Schroeder de Witt, Tarun Gupta, Denys Makoviichuk, Viktor Makoviychuk, Philip H.S. Torr, Mingfei Sun, Shimon Whiteson • 2020

Related benchmarks

Task                                           | Dataset                            | Metric                        | Result  | Rank
Inventory Management                           | Supply Chain Demand Scenarios      | Const-Uni                     | 0.00e+0 | 12
Multi-agent Social Dilemma Equality Evaluation | Harvest                            | Equality Score (E)            | 97.3    | 9
Multi-agent Social Dilemma Equality Evaluation | Cleanup                            | Equality Score (E)            | 84.1    | 9
Cyber Defense                                  | CyGym Volt Typhoon 10 devices      | Avg Player Utility per Device | 36.22   | 7
Cyber Defense                                  | CyGym Volt Typhoon 50 devices      | Avg Player Utility per Device | 2       | 7
Cyber Defense                                  | CyGym Volt Typhoon 100 devices     | Avg Player Utility per Device | 75      | 7
Cyber Defense                                  | CyGym Volt Typhoon 1000 devices    | Avg Player Utility            | 6       | 7
Cyber Defense                                  | CyGym Volt Typhoon 10000 devices   | Avg Player Utility per Device | 0.003   | 7
Social Dilemma Cooperation                     | Two-Player Public Goods Game (test)| r1                            | 1.133   | 7
