
Boosting Reinforcement Learning with Strongly Delayed Feedback Through Auxiliary Short Delays

About

Reinforcement learning (RL) is challenging in the common case of delays between events and their sensory perceptions. State-of-the-art (SOTA) state augmentation techniques either suffer from state space explosion or performance degeneration in stochastic environments. To address these challenges, we present a novel Auxiliary-Delayed Reinforcement Learning (AD-RL) method that leverages auxiliary tasks involving short delays to accelerate RL with long delays, without compromising performance in stochastic environments. Specifically, AD-RL learns a value function for short delays and uses bootstrapping and policy improvement techniques to adjust it for long delays. We theoretically show that this can greatly reduce the sample complexity. On deterministic and stochastic benchmarks, our method significantly outperforms the SOTAs in both sample efficiency and policy performance. Code is available at https://github.com/QingyuanWuNothing/AD-RL.
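To make the "state space explosion" point concrete: the standard augmentation for a delay of d steps replaces the state with the last observed state plus the d pending actions, so the augmented space grows exponentially in d. A minimal illustrative sketch (our own toy calculation, not the authors' implementation; all names are hypothetical) of why learning an auxiliary task with a short delay is so much cheaper:

```python
def augmented_space_size(n_states: int, n_actions: int, delay: int) -> int:
    """Size of the delay-augmented state space, |S| * |A|^delay:
    the last observed state plus the `delay` not-yet-observed actions."""
    return n_states * n_actions ** delay

# State-space explosion under the full delay vs. a short auxiliary delay.
full_delay = augmented_space_size(n_states=10, n_actions=4, delay=8)
aux_delay = augmented_space_size(n_states=10, n_actions=4, delay=2)

print(full_delay)  # 10 * 4**8 = 655360 augmented states
print(aux_delay)   # 10 * 4**2 = 160 augmented states
```

AD-RL exploits this gap: it learns a value function on the small auxiliary (short-delay) augmented space and then adjusts it to the long-delay setting via bootstrapping and policy improvement, rather than learning directly on the exponentially larger space.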

Qingyuan Wu, Simon Sinong Zhan, Yixuan Wang, Yuhui Wang, Chung-Wei Lin, Chen Lv, Qi Zhu, Jürgen Schmidhuber, Chao Huang • 2024

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Continuous Control | MuJoCo Ant v4 | Normalized Return | 0.72 | 24 |
| Continuous Control | MuJoCo Walker2d v4 | Normalized Performance | 112 | 24 |
| Continuous Control | MuJoCo HalfCheetah v4 | Normalized Performance | 107 | 18 |
| Continuous Control | MuJoCo Pusher v4 | Normalized Performance | 1.36 | 18 |
| Reinforcement Learning | MuJoCo Swimmer v4 | Normalized Performance | 271 | 18 |
| Continuous Control | MuJoCo Humanoid v4 | Normalized Performance (Ret_nor) | 98 | 18 |
| Continuous Control | MuJoCo HumanoidStandup v4 | Normalized Performance | 1.22 | 18 |
| Continuous Control | MuJoCo Reacher v4 | Normalized Performance | 103 | 18 |
| Continuous Control | MuJoCo Hopper v4 | Normalized Performance | 1.07 | 18 |
| Continuous Control | MuJoCo v4 (test) | HumanoidStandup-v4 Score | 0.14 | 6 |
