
MOReL : Model-Based Offline Reinforcement Learning

About

In offline reinforcement learning (RL), the goal is to learn a highly rewarding policy based solely on a dataset of historical interactions with the environment. The ability to train RL policies offline can greatly expand the applicability of RL, its data efficiency, and its experimental velocity. Prior work in offline RL has been confined almost exclusively to model-free RL approaches. In this work, we present MOReL, an algorithmic framework for model-based offline RL. This framework consists of two steps: (a) learning a pessimistic MDP (P-MDP) using the offline dataset; and (b) learning a near-optimal policy in this P-MDP. The learned P-MDP has the property that for any policy, the performance in the real environment is approximately lower-bounded by the performance in the P-MDP. This enables it to serve as a good surrogate for policy evaluation and learning, and to overcome common pitfalls of model-based RL such as model exploitation. Theoretically, we show that MOReL is minimax optimal (up to log factors) for offline RL. Through experiments, we show that MOReL matches or exceeds state-of-the-art results on widely studied offline RL benchmarks. Moreover, the modular design of MOReL enables future advances in its components (e.g., generative modeling, uncertainty estimation, and planning) to directly translate into advances for offline RL.
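The P-MDP construction described above can be sketched in a few lines. This is an illustrative simplification, not the paper's implementation: it assumes an ensemble of learned dynamics models whose pairwise disagreement flags unknown state-action pairs, which are redirected to an absorbing HALT state with a penalty reward. The class and parameter names (`threshold`, `halt_penalty`) are made up for this sketch.

```python
import numpy as np

class PessimisticMDP:
    """Sketch of a MOReL-style P-MDP (illustrative, not the official code).

    Uses an ensemble of learned dynamics models; state-action pairs where
    the ensemble members disagree beyond a threshold are treated as
    'unknown' and routed to an absorbing HALT state with a penalty reward.
    """

    HALT = None  # marker for the absorbing penalty state

    def __init__(self, models, threshold, halt_penalty):
        self.models = models            # callables: (s, a) -> predicted s'
        self.threshold = threshold      # max tolerated ensemble disagreement
        self.halt_penalty = halt_penalty  # penalty reward in the HALT state

    def is_unknown(self, s, a):
        # Unknown state-action detector: maximum pairwise L2 distance
        # between ensemble predictions.
        preds = np.stack([m(s, a) for m in self.models])
        gaps = np.linalg.norm(preds[:, None, :] - preds[None, :, :], axis=-1)
        return gaps.max() > self.threshold

    def step(self, s, a, reward_fn):
        # HALT is absorbing: once entered, the penalty is received forever,
        # so the P-MDP lower-bounds real performance for any policy.
        if s is self.HALT or self.is_unknown(s, a):
            return self.HALT, self.halt_penalty
        next_s = self.models[0](s, a)  # could also sample a random member
        return next_s, reward_fn(s, a)
```

A toy usage: two models that agree near the data but diverge far from it.

```python
m1 = lambda s, a: s + a
m2 = lambda s, a: s + a + (np.linalg.norm(s) > 1.0) * 0.5  # diverges off-support
pmdp = PessimisticMDP([m1, m2], threshold=0.1, halt_penalty=-100.0)
s_in = np.zeros(2)   # in-support: models agree, normal transition
s_out = 2 * np.ones(2)  # off-support: disagreement triggers HALT
```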

Rahul Kidambi, Aravind Rajeswaran, Praneeth Netrapalli, Thorsten Joachims • 2020

Related benchmarks

All rows are for the Offline Reinforcement Learning task.

| Dataset | Metric | Result | Rank |
|---|---|---|---|
| D4RL halfcheetah-medium-expert | Normalized Score | 95.6 | 117 |
| D4RL hopper-medium-expert | Normalized Score | 108.7 | 115 |
| D4RL walker2d-medium-expert | Normalized Score | 95.6 | 86 |
| D4RL walker2d-random | Normalized Score | 37.3 | 77 |
| D4RL Medium-Replay Hopper | Normalized Score | 93.6 | 72 |
| D4RL halfcheetah-random | Normalized Score | 38.9 | 70 |
| D4RL Walker2d Medium v2 | Normalized Return | 77.8 | 67 |
| D4RL hopper-random | Normalized Score | 53.6 | 62 |
| D4RL Medium-Replay HalfCheetah | Normalized Score | 40.2 | 59 |
| D4RL Medium HalfCheetah | Normalized Score | 42.1 | 59 |

Showing 10 of 93 rows.
...
