
Monotonic Value Function Factorisation for Deep Multi-Agent Reinforcement Learning

About

In many real-world settings, a team of agents must coordinate its behaviour while acting in a decentralised fashion. At the same time, it is often possible to train the agents in a centralised fashion where global state information is available and communication constraints are lifted. Learning joint action-values conditioned on extra state information is an attractive way to exploit centralised learning, but the best strategy for then extracting decentralised policies is unclear. Our solution is QMIX, a novel value-based method that can train decentralised policies in a centralised end-to-end fashion. QMIX employs a mixing network that estimates joint action-values as a monotonic combination of per-agent values. We structurally enforce that the joint-action value is monotonic in the per-agent values, through the use of non-negative weights in the mixing network, which guarantees consistency between the centralised and decentralised policies. To evaluate the performance of QMIX, we propose the StarCraft Multi-Agent Challenge (SMAC) as a new benchmark for deep multi-agent reinforcement learning. We evaluate QMIX on a challenging set of SMAC scenarios and show that it significantly outperforms existing multi-agent reinforcement learning methods.
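The monotonicity constraint described above can be illustrated with a minimal sketch. In QMIX the mixing weights are produced by hypernetworks conditioned on the global state; the toy version below simply samples random weights and takes absolute values, which is the mechanism that enforces non-negativity and hence monotonicity. All dimensions and variable names here are illustrative assumptions, not details from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 3 agents, a 32-unit mixing layer.
n_agents, hidden = 3, 32

# Non-negative weights via abs(); biases may take any sign without
# breaking monotonicity in the per-agent values.
W1 = np.abs(rng.normal(size=(n_agents, hidden)))
b1 = rng.normal(size=hidden)
W2 = np.abs(rng.normal(size=(hidden, 1)))
b2 = rng.normal(size=1)

def q_tot(q_agents):
    """Monotonic mixing: ELU(q @ W1 + b1) @ W2 + b2 with W1, W2 >= 0."""
    h = q_agents @ W1 + b1
    h = np.where(h > 0, h, np.expm1(h))  # ELU is increasing, so it preserves monotonicity
    return float(h @ W2 + b2)

# Check: raising any single agent's value never lowers the joint value,
# so each agent's greedy action is also greedy with respect to Q_tot.
q = rng.normal(size=n_agents)
for a in range(n_agents):
    bumped = q.copy()
    bumped[a] += 1.0
    assert q_tot(bumped) >= q_tot(q)
```

Because every path from a per-agent value to the output passes only through non-negative weights and increasing activations, a global argmax over the joint action decomposes into independent per-agent argmaxes, which is exactly the centralised/decentralised consistency the abstract refers to.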

Tabish Rashid, Mikayel Samvelyan, Christian Schroeder de Witt, Gregory Farquhar, Jakob Foerster, Shimon Whiteson • 2020

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Multi-Agent Reinforcement Learning | SMAC v2 (test) | Win Rate (Protoss 5 Units): 69 | 20 |
| Bike-sharing redistribution | London 20% initial inventory ratio (test) | Region 1 Count: 1.37e+3 | 16 |
| Bike-sharing redistribution | London 30% initial inventory ratio (test) | Region 1 Count: 1.61e+3 | 16 |
| Bike Redistribution | Washington D.C. (Region 2) | Count @ 5% Threshold: 249 | 16 |
| Cooperative Multi-Agent Reinforcement Learning | SustainGym BuildingEnv season_2 (test) | Normalized Episodic Return: 89.5 | 12 |
| Cooperative Multi-Agent Reinforcement Learning | SustainGym BuildingEnv climatic and seasonal shifts | Normalized Episodic Return: 47.8 | 12 |
| Multi-Agent Reinforcement Learning | SMAC 6h_vs_8z (test) | -- | 12 |
| Multi-Agent Reinforcement Learning | SMAC corridor (test) | -- | 12 |
| Multi-Agent Reinforcement Learning | Level-Based Foraging 2s-8x8-2p-2f-coop v2 (test) | Final Episode Return: 73 | 10 |
| Multi-Agent Reinforcement Learning | SMAC 1c3s5z (test) | Test Win Rate: 97 | 10 |
Showing 10 of 37 rows
