
MOPO: Model-based Offline Policy Optimization

About

Offline reinforcement learning (RL) refers to the problem of learning policies entirely from a large batch of previously collected data. This problem setting offers the promise of utilizing such datasets to acquire policies without any costly or dangerous active exploration. However, it is also challenging, due to the distributional shift between the offline training data and the states visited by the learned policy. Despite significant recent progress, the most successful prior methods are model-free and constrain the policy to the support of the data, precluding generalization to unseen states. In this paper, we first observe that an existing model-based RL algorithm already produces significant gains in the offline setting compared to model-free approaches. However, standard model-based RL methods, designed for the online setting, provide no explicit mechanism to avoid the offline setting's distributional shift issue. Instead, we propose to modify existing model-based RL methods by applying them with rewards artificially penalized by the uncertainty of the dynamics. We theoretically show that the algorithm maximizes a lower bound of the policy's return under the true MDP. We also characterize the trade-off between the gain and risk of leaving the support of the batch data. Our algorithm, Model-based Offline Policy Optimization (MOPO), outperforms standard model-based RL algorithms and prior state-of-the-art model-free offline RL algorithms on existing offline RL benchmarks and on two challenging continuous control tasks that require generalizing from data collected for a different task. The code is available at https://github.com/tianheyu927/mopo.
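The core mechanism described above — penalizing the model's reward by an estimate of dynamics uncertainty — can be sketched in a few lines. The snippet below is a minimal illustration, not the paper's implementation: the ensemble is a toy stand-in for learned neural dynamics models, and the uncertainty heuristic (maximum per-dimension standard deviation of ensemble predictions) is one simple choice of disagreement measure; the penalty weight `lam` is a hypothetical hyperparameter.

```python
import numpy as np

rng = np.random.default_rng(0)

def ensemble_predict(state, action, n_models=5):
    """Toy stand-in for a learned dynamics ensemble: each 'model'
    predicts the next state with its own random perturbation."""
    preds = []
    for _ in range(n_models):
        w = rng.normal(scale=0.1, size=state.shape)  # stand-in model parameters
        preds.append(state + 0.1 * action + w)
    return np.stack(preds)  # shape: (n_models, state_dim)

def penalized_reward(reward_hat, preds, lam=1.0):
    """MOPO-style penalized reward: r~(s, a) = r_hat(s, a) - lam * u(s, a),
    where u is taken here as the max per-dimension std across the ensemble."""
    u = preds.std(axis=0).max()  # uncertainty = disagreement among models
    return reward_hat - lam * u

# Example rollout step in the learned model with a penalized reward.
state, action = np.zeros(3), np.ones(3)
preds = ensemble_predict(state, action)
r_tilde = penalized_reward(reward_hat=1.0, preds=preds)
assert r_tilde <= 1.0  # the penalty can only reduce the model's reward
```

Since `lam >= 0` and the ensemble standard deviation is non-negative, the penalized reward lower-bounds the model reward, which is what allows the method to maximize a lower bound on the true return.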

Tianhe Yu, Garrett Thomas, Lantao Yu, Stefano Ermon, James Zou, Sergey Levine, Chelsea Finn, Tengyu Ma • 2020

Related benchmarks

| Task                           | Dataset                          | Metric            | Result | Rank |
|--------------------------------|----------------------------------|-------------------|--------|------|
| Offline Reinforcement Learning | D4RL halfcheetah-medium-expert   | Normalized Score  | 72.7   | 117  |
| Offline Reinforcement Learning | D4RL hopper-medium-expert        | Normalized Score  | 23.7   | 115  |
| Offline Reinforcement Learning | D4RL walker2d-medium-expert      | Normalized Score  | 44.6   | 86   |
| Offline Reinforcement Learning | D4RL walker2d-random             | Normalized Score  | 4.2    | 77   |
| Offline Reinforcement Learning | D4RL halfcheetah-random          | Normalized Score  | 35.4   | 70   |
| Offline Reinforcement Learning | D4RL Walker2d Medium v2          | Normalized Return | 41.2   | 67   |
| Offline Reinforcement Learning | D4RL hopper-random               | Normalized Score  | 11.7   | 62   |
| Offline Reinforcement Learning | D4RL halfcheetah v2 (medium-replay) | Normalized Score | 69.2  | 58   |
| Offline Reinforcement Learning | D4RL Medium Walker2d             | Normalized Score  | 17.8   | 58   |
| Offline Reinforcement Learning | D4RL halfcheetah-expert v2       | Normalized Score  | 81.3   | 56   |
Showing 10 of 246 rows
...
