
Delayed Reinforcement Learning by Imitation

About

When the agent's observations or interactions are delayed, classic reinforcement learning tools usually fail. In this paper, we propose a simple, novel, and efficient solution to this problem. We assume that, in the undelayed environment, an efficient policy is known or can easily be learned, but the task may suffer from delays in practice, and we thus want to take them into account. We present a novel algorithm, Delayed Imitation with Dataset Aggregation (DIDA), which builds upon imitation learning methods to learn how to act in a delayed environment from undelayed demonstrations. We provide a theoretical analysis of the approach that guides the practical design of DIDA. These results are also of general interest in the delayed reinforcement learning literature, as they provide bounds on the performance gap between delayed and undelayed tasks under smoothness conditions. We show empirically that DIDA achieves high performance with remarkable sample efficiency on a variety of tasks, including robotic locomotion, classic control, and trading.
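The core idea can be sketched as a DAgger-style loop: the learner acts in the delayed environment from an extended state (the last observed state plus the buffer of pending actions), while the undelayed expert labels each extended state with its action on the current true state, which is available in simulation. The following is a minimal illustrative sketch on a toy 1-D task, not the paper's implementation; the expert, dynamics, and placeholder learner are all assumptions.

```python
import random
from collections import deque

def expert_policy(state):
    """Undelayed expert (assumed known): drive the 1-D state toward 0."""
    return -1.0 if state > 0 else 1.0

def env_step(state, action):
    """Toy 1-D dynamics; a stand-in for the real delayed task."""
    return state + 0.1 * action + random.gauss(0.0, 0.01)

def collect_rollout(learner, delay, horizon):
    """One imitation iteration: run the *learner* in the delayed environment,
    but label every extended state with the expert's action on the current
    true state (accessible in simulation)."""
    state = random.uniform(-1.0, 1.0)
    obs_queue = deque([state] * (delay + 1), maxlen=delay + 1)  # delayed observations
    act_buffer = deque([0.0] * delay, maxlen=delay)             # pending actions
    dataset = []
    for _ in range(horizon):
        extended = (obs_queue[0], tuple(act_buffer))       # what the delayed agent sees
        dataset.append((extended, expert_policy(state)))   # expert label on true state
        action = learner(extended)
        state = env_step(state, action)
        obs_queue.append(state)
        act_buffer.append(action)
    return dataset

def train(dataset):
    """Placeholder 'learner': predict the post-delay state with a crude forward
    model, then imitate the expert's bang-bang rule (stands in for any
    supervised regressor fit on the aggregated dataset)."""
    def policy(extended):
        obs, pending = extended
        predicted = obs + 0.1 * sum(pending)
        return -1.0 if predicted > 0 else 1.0
    return policy

random.seed(0)
learner = lambda ext: random.choice([-1.0, 1.0])  # initial (random) policy
data = []
for _ in range(3):  # a few iterations with dataset aggregation
    data += collect_rollout(learner, delay=3, horizon=50)
    learner = train(data)
print(len(data))  # 150 aggregated (extended state, expert action) pairs
```

Because the learner's rollouts generate the states that end up in the dataset, aggregation corrects the distribution mismatch between expert and learner trajectories, which is what makes the imitation step sound in the delayed setting.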

Pierre Liotet, Davide Maran, Lorenzo Bisi, Marcello Restelli • 2022

Related benchmarks

| Task | Dataset | Result | Rank |
|---|---|---|---|
| Continuous Control | MuJoCo Ant v4 | Normalized Return: 0.89 | 24 |
| Continuous Control | MuJoCo Walker2d v4 | Normalized Performance: 61 | 24 |
| Continuous Control | MuJoCo Reacher v4 | Normalized Performance: 103 | 18 |
| Continuous Control | MuJoCo HalfCheetah v4 | Normalized Performance: 90 | 18 |
| Reinforcement Learning | MuJoCo Swimmer v4 | Normalized Performance: 105 | 18 |
| Continuous Control | MuJoCo Hopper v4 | Normalized Performance: 0.4 | 18 |
| Continuous Control | MuJoCo Humanoid v4 | Normalized Performance (Ret_nor): 8 | 18 |
| Continuous Control | MuJoCo HumanoidStandup v4 | Normalized Performance: 1 | 18 |
| Continuous Control | MuJoCo Pusher v4 | Normalized Performance: 1.04 | 18 |
| Continuous Control | MuJoCo v4 (test) | HumanoidStandup-v4 Score: 0.1 | 6 |
