
DFAC Framework: Factorizing the Value Function via Quantile Mixture for Multi-Agent Distributional Q-Learning

About

In fully cooperative multi-agent reinforcement learning (MARL) settings, the environments are highly stochastic due to the partial observability of each agent and the continuously changing policies of the other agents. To address the above issues, we integrate distributional RL and value function factorization methods by proposing a Distributional Value Function Factorization (DFAC) framework to generalize expected value function factorization methods to their DFAC variants. DFAC extends the individual utility functions from deterministic variables to random variables, and models the quantile function of the total return as a quantile mixture. To validate DFAC, we demonstrate DFAC's ability to factorize a simple two-step matrix game with stochastic rewards and perform experiments on all Super Hard tasks of StarCraft Multi-Agent Challenge, showing that DFAC is able to outperform expected value function factorization baselines.
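The core idea of the quantile mixture can be illustrated numerically: each agent's utility is treated as a random variable represented by its quantile function, and the total return's quantile function is a weighted sum of the individual quantile functions. The sketch below is illustrative only; the weights and utility values are made up, and in the actual framework the weights would come from a learned mixing function.

```python
import numpy as np

n_agents, n_quantiles = 3, 5
tau = (np.arange(n_quantiles) + 0.5) / n_quantiles  # quantile midpoints

rng = np.random.default_rng(0)
# Per-agent quantile values; sorting each row makes it a valid
# (non-decreasing) quantile function over tau.
agent_quantiles = np.sort(rng.normal(size=(n_agents, n_quantiles)), axis=1)

# Non-negative mixture weights (illustrative; a mixing network
# would produce these in practice)
w = np.array([0.5, 0.3, 0.2])

# Quantile mixture: the total return's quantile function is the
# weighted sum of the individual quantile functions at each tau.
z_tot = w @ agent_quantiles  # shape: (n_quantiles,)

# A non-negative weighted sum of non-decreasing functions is itself
# non-decreasing, so z_tot is again a valid quantile function.
assert np.all(np.diff(z_tot) >= 0)
```

This closure property is what lets the factorization operate on full return distributions rather than only on their expectations.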

Wei-Fang Sun, Cheng-Kuang Lee, Chun-Yi Lee • 2021

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Multi-Agent Reinforcement Learning | SMAC maps | 5m_vs_6m Score: 58.7 | 18 |
| Multi-Agent Reinforcement Learning | SMAC 6h_vs_8z (test) | Average Score: 19.4 | 12 |
| Multi-Agent Reinforcement Learning | SMAC corridor (test) | Average Score: 20 | 12 |
| Multi-Agent Reinforcement Learning | SMAC 3s5z_vs_3s6z (test) | Test Win Rate: 20.94 | 8 |
| Multi-Agent Reinforcement Learning | SMAC 6h_vs_8z Super Hard (test) | -- | 8 |
| Multi-Agent Reinforcement Learning | SMAC 3s5z_vs_3s6z Super Hard (test) | -- | 8 |
| Multi-Agent Reinforcement Learning | SMAC MMM2 Super Hard (test) | -- | 8 |
| Multi-Agent Reinforcement Learning | SMAC 27m_vs_30m Super Hard (test) | -- | 8 |
| Multi-Agent Reinforcement Learning | SMAC corridor Super Hard (test) | -- | 8 |
| Multi-Agent Reinforcement Learning | SMAC 27m_vs_30m (test) | Test Win Rate: 19.71 | 7 |

Showing 10 of 11 rows.

Other info

Code
