
RLx2: Training a Sparse Deep Reinforcement Learning Model from Scratch

About

Training deep reinforcement learning (DRL) models usually incurs high computation costs, so compressing DRL models holds immense potential for training acceleration and model deployment. However, existing methods that generate small models mainly adopt a knowledge-distillation-based approach that iteratively trains a dense network, so the training process still demands massive computing resources. Sparse training from scratch in DRL has not been well explored and is particularly challenging due to the non-stationarity of bootstrapped training. In this work, we propose a novel sparse DRL training framework, "the Rigged Reinforcement Learning Lottery" (RLx2), which builds upon gradient-based topology evolution and is capable of training a DRL agent entirely with sparse networks. Specifically, RLx2 introduces a novel multi-step TD target mechanism with a dynamic-capacity replay buffer to achieve robust value learning and efficient topology exploration in sparse models. It reaches state-of-the-art sparse training performance in several tasks, showing 7.5×–20× model compression with less than 3% performance degradation, and up to 20× and 50× FLOPs reduction for training and inference, respectively.
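The two ingredients named in the abstract can be illustrated with a short sketch. Below is a hedged, simplified illustration, not the authors' implementation: `n_step_td_target` computes a standard multi-step TD target (discounted reward sum plus a bootstrapped value), and `update_mask` performs one RigL-style gradient-based topology-evolution step, dropping the smallest-magnitude active weights and growing the same number of inactive connections with the largest gradient magnitude. All function names and the `drop_frac` parameter are illustrative assumptions.

```python
import numpy as np

def n_step_td_target(rewards, bootstrap_q, gamma=0.99):
    """Multi-step TD target (illustrative): discounted sum of n observed
    rewards plus the discounted bootstrap value Q(s_{t+n}, a_{t+n})."""
    n = len(rewards)
    discounts = gamma ** np.arange(n)
    return float(np.sum(discounts * np.asarray(rewards)) + gamma ** n * bootstrap_q)

def update_mask(weights, grads, mask, drop_frac=0.3):
    """One gradient-based topology-evolution step (RigL-style sketch):
    drop the smallest-magnitude active weights, then grow an equal number
    of currently inactive connections with the largest |gradient|.
    Overall sparsity is preserved."""
    active = np.flatnonzero(mask)
    inactive = np.flatnonzero(mask == 0)
    n_swap = int(len(active) * drop_frac)
    if n_swap == 0 or len(inactive) < n_swap:
        return mask
    new_mask = mask.copy()
    # Drop: smallest-magnitude weights among active connections.
    drop_idx = active[np.argsort(np.abs(weights[active]))[:n_swap]]
    new_mask[drop_idx] = 0
    # Grow: largest-magnitude gradients among inactive connections.
    grow_idx = inactive[np.argsort(-np.abs(grads[inactive]))[:n_swap]]
    new_mask[grow_idx] = 1
    return new_mask
```

In a sparse-from-scratch loop, the mask would multiply the weight matrix at every forward pass, and `update_mask` would run periodically during training so the topology adapts while the parameter count stays fixed.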

Yiqin Tan, Pihe Hu, Ling Pan, Jiatai Huang, Longbo Huang • 2022

Related benchmarks

Task                                 Dataset          Metric                  Result   Rank
Multi-Agent Reinforcement Learning   SMAC 6h* v1      Normalized Win Rate     72.7     18
Multi-Agent Reinforcement Learning   SMAC Avg. v1     Normalized Win Rate     0.877    18
Multi-Agent Reinforcement Learning   SMAC 3m v1       Normalized Win Rate     98       18
Multi-Agent Reinforcement Learning   SMAC 2s3z v1     Normalized Win Rate     94       18
Multi-Agent Reinforcement Learning   SMAC 3s5z v1     Normalized Win Rate     86.2     18
Continuous Control                   Ant v5           Normalized Mean Return  1.01     12
Continuous Control                   Halfcheetah v5   Normalized Mean Return  0.95     12
