
Challenges and Opportunities in Offline Reinforcement Learning from Visual Observations

About

Offline reinforcement learning has shown great promise in leveraging large pre-collected datasets for policy learning, allowing agents to forgo often-expensive online data collection. However, offline reinforcement learning from visual observations with continuous action spaces remains under-explored, with a limited understanding of the key challenges in this complex domain. In this paper, we establish simple baselines for continuous control in the visual domain and introduce a suite of benchmarking tasks for offline reinforcement learning from visual observations. The suite is designed to better represent the data distributions present in real-world offline RL problems and is guided by a set of desiderata for offline RL from visual observations, including robustness to visual distractions and to visually identifiable changes in dynamics. Using this suite of benchmarking tasks, we show that simple modifications to two popular vision-based online reinforcement learning algorithms, DreamerV2 and DrQ-v2, suffice to outperform existing offline RL methods and establish competitive baselines for continuous control in the visual domain. We rigorously evaluate these algorithms and empirically compare state-of-the-art model-based and model-free offline RL methods for continuous control from visual observations. All code and data used in this evaluation are open-sourced to facilitate progress in this domain.
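Several of the benchmark results below are reported as normalized scores. A common convention in offline RL benchmarks (e.g. D4RL-style suites) is to map a raw episodic return onto a 0–100 scale between a random policy's return and an expert policy's return. The sketch below illustrates that convention; the function name and the reference returns are illustrative, not taken from the paper:

```python
def normalized_score(raw_return: float, random_return: float, expert_return: float) -> float:
    """Map a raw episodic return onto a 0-100 scale, where 0 corresponds to a
    random policy and 100 to an expert policy (D4RL-style normalization).
    The reference returns here are hypothetical, not values from the paper."""
    return 100.0 * (raw_return - random_return) / (expert_return - random_return)

# Example with made-up reference returns: a raw return halfway between
# random (50) and expert (950) normalizes to 50.
print(normalized_score(500.0, 50.0, 950.0))  # -> 50.0
```

Under this convention, a "Normalized Score" of 61.6 means the policy recovers roughly 62% of the random-to-expert return gap on that task.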

Cong Lu, Philip J. Ball, Tim G. J. Rudner, Jack Parker-Holder, Michael A. Osborne, Yee Whye Teh • 2022

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Offline Reinforcement Learning | Meta-World medium-replay | BP->DC* | 2.14e+3 | 10 |
| Offline Reinforcement Learning | V-D4RL Cheetah-run pixel-based (medium-replay) | Normalized Score | 61.6 | 8 |
| Continuous Control (Offline RL) | DMC Walker Walk → Walker Uphill v1 (offline) | Mean Reward | 354 | 8 |
| Continuous Control (Offline RL) | DMC Cheetah Run → Cheetah Downhill v1 (offline) | Mean Reward | 702 | 8 |
| Continuous Control (Offline RL) | DMC Cheetah Run → Cheetah Uphill v1 (offline) | Mean Reward | 208 | 8 |
| Continuous Control (Offline RL) | DMC Cheetah Run → Cheetah Nopaw v1 (offline) | Mean Reward | 454 | 8 |
| Continuous Control (Offline RL) | DMC Walker Walk → Walker Downhill v1 (offline) | Mean Reward | 435 | 8 |
| Continuous Control (Offline RL) | DMC Walker Walk → Walker Nofoot v1 (offline) | Mean Reward | 407 | 8 |
| Offline Reinforcement Learning | V-D4RL Walker-walk medium-replay | Normalized Return | 56.6 | 5 |
| Offline Reinforcement Learning | DMC medium-expert | WW to WD | 808 | 5 |
(Showing 10 of 16 rows.)
