
Value-Based Deep Multi-Agent Reinforcement Learning with Dynamic Sparse Training

About

Deep multi-agent reinforcement learning (MARL) relies on neural networks with numerous parameters in multi-agent scenarios, often incurring substantial computational overhead. Consequently, there is an urgent need to expedite training and enable model compression in MARL. This paper proposes the utilization of dynamic sparse training (DST), a technique proven effective in deep supervised learning tasks, to alleviate the computational burden of MARL training. However, a direct adoption of DST fails to yield satisfactory MARL agents, leading to breakdowns in value learning within deep sparse value-based MARL models. Motivated by this challenge, we introduce an innovative Multi-Agent Sparse Training (MAST) framework aimed at simultaneously enhancing the reliability of learning targets and the rationality of the sample distribution to improve value learning in sparse models. Specifically, MAST incorporates the Soft Mellowmax operator with a hybrid TD($\lambda$) scheme to establish reliable learning targets. Additionally, it employs a dual replay buffer mechanism to improve the distribution of training samples. Building on these components, MAST uses gradient-based topology evolution to train multiple MARL agents exclusively with sparse networks. Our comprehensive experimental investigation across various value-based MARL algorithms on multiple benchmarks demonstrates, for the first time, significant reductions of up to $20\times$ in Floating Point Operations (FLOPs) for both training and inference, with less than $3\%$ performance degradation.
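For intuition on the operator underlying MAST's learning targets: the Soft Mellowmax operator is a variant of the standard mellowmax operator, which replaces the hard max over action values with a smooth, tunable log-average-exp. The sketch below implements plain mellowmax only (not the paper's exact Soft Mellowmax variant); the function name and the stability shift are illustrative.

```python
import numpy as np

def mellowmax(q_values, omega=10.0):
    """Standard mellowmax: mm_w(q) = (1/w) * log( (1/n) * sum_i exp(w * q_i) ).

    Interpolates between the mean (w -> 0) and the max (w -> inf) of the
    Q-values, giving a softer, lower-variance learning target than a hard max.
    Computed with a log-sum-exp shift for numerical stability.
    """
    q = np.asarray(q_values, dtype=float)
    n = q.size
    m = q.max()  # shift by the max so exponentials never overflow
    return m + np.log(np.exp(omega * (q - m)).sum() / n) / omega
```

Because mellowmax lower-bounds the hard max, bootstrapped targets built from it are less prone to the value overestimation that plagues sparse value-based agents.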
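The gradient-based topology evolution mentioned above can be illustrated with a RigL-style drop-and-grow step: periodically deactivate the weakest active connections and regrow connections where the dense gradient is largest, keeping the overall sparsity level fixed. This is a minimal sketch of that general technique, not MAST's exact update rule; the function name and `drop_frac` parameter are assumptions.

```python
import numpy as np

def evolve_topology(weights, grads, mask, drop_frac=0.3):
    """One RigL-style topology-evolution step on a flattened weight tensor:
    drop the smallest-magnitude active weights, then regrow the same number
    of connections at inactive positions with the largest gradient magnitude.
    Returns the updated binary mask (sparsity is preserved)."""
    w = weights.ravel() * mask.ravel()  # zero out inactive connections
    g = grads.ravel()
    m = mask.ravel().copy()

    n_change = int(drop_frac * m.sum())

    # Drop: among active connections, deactivate those with smallest |w|
    active = np.flatnonzero(m)
    dropped = active[np.argsort(np.abs(w[active]))[:n_change]]
    m[dropped] = 0.0

    # Grow: among inactive positions, activate those with largest |grad|
    inactive = np.flatnonzero(m == 0)
    grown = inactive[np.argsort(-np.abs(g[inactive]))[:n_change]]
    m[grown] = 1.0  # regrown weights are initialized to zero

    return m.reshape(mask.shape)
```

Since the number of dropped and regrown connections is equal, the network stays at the target sparsity throughout training, which is what yields FLOP savings at both training and inference time.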

Pihe Hu, Shaolong Li, Zhuoran Li, Ling Pan, Longbo Huang • 2024

Related benchmarks

| Task | Dataset | Result (Normalized Win Rate) | Rank |
|------|---------|------------------------------|------|
| Multi-Agent Reinforcement Learning | SMAC 3m v1 | 100.9 | 18 |
| Multi-Agent Reinforcement Learning | SMAC 2s3z v1 | 100.2 | 18 |
| Multi-Agent Reinforcement Learning | SMAC 3s5z v1 | 99.4 | 18 |
| Multi-Agent Reinforcement Learning | SMAC 6h* v1 | 104.9 | 18 |
| Multi-Agent Reinforcement Learning | SMAC Avg. v1 | 1.006 | 18 |
