DFAC Framework: Factorizing the Value Function via Quantile Mixture for Multi-Agent Distributional Q-Learning
About
In fully cooperative multi-agent reinforcement learning (MARL) settings, environments are highly stochastic due to each agent's partial observability and the continuously changing policies of the other agents. To address these issues, we integrate distributional RL with value function factorization methods by proposing a Distributional Value Function Factorization (DFAC) framework, which generalizes expected value function factorization methods to their DFAC variants. DFAC extends the individual utility functions from deterministic variables to random variables and models the quantile function of the total return as a quantile mixture. To validate DFAC, we demonstrate its ability to factorize a simple two-step matrix game with stochastic rewards and perform experiments on all Super Hard maps of the StarCraft Multi-Agent Challenge (SMAC), showing that DFAC outperforms expected value function factorization baselines.
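The quantile-mixture idea can be sketched numerically: if each agent's return distribution is represented by its quantile values at fixed fractions, a weighted sum with non-negative weights yields a valid (non-decreasing) quantile function for the total return. The snippet below is a minimal sketch of this property, not the authors' implementation; the array shapes, weight values, and variable names are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

n_agents, n_quantiles = 3, 8
taus = (np.arange(n_quantiles) + 0.5) / n_quantiles  # quantile fractions

# Per-agent quantile values F^{-1}_{Z_i}(tau): each row is sorted so it
# is non-decreasing in tau, i.e., a valid quantile function.
agent_quantiles = np.sort(rng.normal(size=(n_agents, n_quantiles)), axis=1)

# Non-negative mixing weights (in DFAC-style mixers these would be
# produced by a network; fixed positive values here for the example).
weights = np.array([0.5, 1.0, 0.25])

# Quantile mixture: F^{-1}_{Z_tot}(tau) = sum_i w_i * F^{-1}_{Z_i}(tau).
total_quantiles = weights @ agent_quantiles

# Non-negative weights guarantee the mixture is still non-decreasing,
# so the total return's quantile function remains valid.
assert np.all(np.diff(total_quantiles) >= 0)
print(total_quantiles)
```

The key design point this illustrates is that restricting the mixing weights to be non-negative is what keeps the factorized total-return quantile function well-defined.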
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Multi-Agent Reinforcement Learning | SMAC maps | 5m_vs_6m Score: 58.7 | 18 |
| Multi-Agent Reinforcement Learning | SMAC 6h_vs_8z (test) | Average Score: 19.4 | 12 |
| Multi-Agent Reinforcement Learning | SMAC corridor (test) | Average Score: 20 | 12 |
| Multi-Agent Reinforcement Learning | SMAC 3s5z_vs_3s6z (test) | Test Win Rate: 20.94 | 8 |
| Multi-Agent Reinforcement Learning | SMAC 6h_vs_8z Super Hard (test) | -- | 8 |
| Multi-Agent Reinforcement Learning | SMAC 3s5z_vs_3s6z Super Hard (test) | -- | 8 |
| Multi-Agent Reinforcement Learning | SMAC MMM2 Super Hard (test) | -- | 8 |
| Multi-Agent Reinforcement Learning | SMAC 27m_vs_30m Super Hard (test) | -- | 8 |
| Multi-Agent Reinforcement Learning | SMAC corridor Super Hard (test) | -- | 8 |
| Multi-Agent Reinforcement Learning | SMAC 27m_vs_30m (test) | Test Win Rate: 19.71 | 7 |