QPLEX: Duplex Dueling Multi-Agent Q-Learning
About
We explore value-based multi-agent reinforcement learning (MARL) in the popular paradigm of centralized training with decentralized execution (CTDE). CTDE relies on an important concept, the Individual-Global-Max (IGM) principle, which requires consistency between joint and local greedy action selections to support efficient local decision-making. However, to achieve scalability, existing MARL methods either limit the representational expressiveness of their value function classes or relax IGM consistency, which risks training instability or poor performance in complex domains. This paper presents a novel MARL approach, called duPLEX dueling multi-agent Q-learning (QPLEX), which uses a duplex dueling network architecture to factorize the joint value function. This duplex dueling structure encodes the IGM principle directly into the neural network architecture and thus enables efficient value function learning. Theoretical analysis shows that QPLEX realizes the complete IGM function class. Experiments on StarCraft II micromanagement tasks demonstrate that QPLEX significantly outperforms state-of-the-art baselines in both online and offline data collection settings, and also reveal that QPLEX achieves high sample efficiency and can benefit from offline datasets without additional online exploration.
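To make the IGM idea concrete, here is a minimal, hypothetical sketch (not the paper's actual network) of a duplex-dueling-style factorization: each agent's utility is split into a value term and a non-positive advantage term, and the joint Q-value reweights the advantages with arbitrary *positive* coefficients (standing in for QPLEX's learned importance weights). Because the advantages are zero exactly at each agent's greedy action and negative elsewhere, any positive weighting leaves the joint argmax equal to the tuple of local argmaxes, which is the IGM property.

```python
import itertools

def qplex_style_joint_q(agent_qs, joint_action, lambdas):
    """Illustrative factorization: Q_tot = sum_i [V_i + lambda_i * A_i(a_i)].

    agent_qs:     per-agent lists of Q-values over local actions
    joint_action: tuple of one action index per agent
    lambdas:      positive weights (stand-ins for QPLEX's learned weights)
    """
    total = 0.0
    for qs, a, lam in zip(agent_qs, joint_action, lambdas):
        v = max(qs)          # dueling value term: V_i = max_a Q_i(a)
        adv = qs[a] - v      # advantage A_i(a) <= 0, zero at the greedy action
        total += v + lam * adv
    return total

# Toy per-agent Q-values; local greedy actions are (1, 2).
agent_qs = [[1.0, 3.0, 2.0], [0.5, 0.1, 0.9]]
local_greedy = tuple(max(range(len(q)), key=q.__getitem__) for q in agent_qs)

# Any positive lambdas preserve the joint argmax -> IGM consistency holds.
lambdas = [0.7, 2.3]
joint_greedy = max(
    itertools.product(*(range(len(q)) for q in agent_qs)),
    key=lambda a: qplex_style_joint_q(agent_qs, a, lambdas),
)
assert joint_greedy == local_greedy
```

This only illustrates why positive advantage weights enforce IGM; in QPLEX itself the weights and advantages are produced by attention-based networks conditioned on the joint history.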
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Multi-Agent Reinforcement Learning | SMAC maps | 5m_vs_6m Score | 5 | 18 |
| Multi-Agent Reinforcement Learning | SMAC 2s_vs_1sc v1 (test) | Win Rate | 98.4 | 9 |
| Multi-Agent Reinforcement Learning | SMAC 3s5z_vs_3s6z v1 (test) | Win Rate | 10.2 | 9 |
| Multi-Agent Reinforcement Learning | SMAC corridor v1 (test) | Win Rate | 0.0 | 9 |
| Multi-Agent Reinforcement Learning | SMAC 3s5z_vs_3s6z Super Hard (test) | Averaged Score | 20.42 | 8 |
| Multi-Agent Reinforcement Learning | SMAC 6h_vs_8z Super Hard (test) | Averaged Score | 15.95 | 8 |
| Multi-Agent Reinforcement Learning | SMAC MMM2 Super Hard (test) | Averaged Score | 19.6 | 8 |
| Multi-Agent Reinforcement Learning | SMAC 27m_vs_30m Super Hard (test) | Averaged Score | 19.33 | 8 |
| Multi-Agent Reinforcement Learning | SMAC corridor Super Hard (test) | Averaged Score | 18.73 | 8 |
| Multi-Agent Reinforcement Learning | SMAC Super Hard (test) | 6h_vs_8z Win Rate | 0.0 | 8 |