
COMBO: Conservative Offline Model-Based Policy Optimization

About

Model-based algorithms, which learn a dynamics model from logged experience and perform some sort of pessimistic planning under the learned model, have emerged as a promising paradigm for offline reinforcement learning (offline RL). However, practical variants of such model-based algorithms rely on explicit uncertainty quantification for incorporating pessimism. Uncertainty estimation with complex models, such as deep neural networks, can be difficult and unreliable. We overcome this limitation by developing a new model-based offline RL algorithm, COMBO, that regularizes the value function on out-of-support state-action tuples generated via rollouts under the learned model. This results in a conservative estimate of the value function for out-of-support state-action tuples, without requiring explicit uncertainty estimation. We theoretically show that our method optimizes a lower bound on the true policy value, that this bound is tighter than that of prior methods, and that our approach satisfies a policy improvement guarantee in the offline setting. Through experiments, we find that COMBO consistently performs as well as or better than prior offline model-free and model-based methods on widely studied offline RL benchmarks, including image-based tasks.
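The core idea can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; the function and parameter names (`combo_critic_loss`, `beta`) are hypothetical, and the real method trains deep Q-networks with actor-critic machinery. The sketch only shows the shape of the conservative critic objective: a standard Bellman error on dataset tuples, plus a regularizer that pushes Q-values down on model-generated (out-of-support) state-action tuples and up on dataset tuples.

```python
import numpy as np

def combo_critic_loss(q, dataset_sa, model_sa, targets, beta=1.0):
    """Sketch of a COMBO-style conservative critic loss (illustrative only).

    q          -- callable mapping a batch of (state, action) pairs to Q-values
    dataset_sa -- (state, action) pairs drawn from the offline dataset
    model_sa   -- (state, action) pairs generated by rollouts under the
                  learned dynamics model
    targets    -- Bellman backup targets for the dataset batch
    beta       -- conservatism coefficient (hypothetical name)
    """
    # Standard squared Bellman error on the offline dataset.
    bellman_error = np.mean((q(dataset_sa) - targets) ** 2)
    # Conservatism term: penalize high Q on model-generated tuples,
    # reward high Q on dataset tuples. This lowers the value estimate
    # on out-of-support tuples without explicit uncertainty estimation.
    conservatism = np.mean(q(model_sa)) - np.mean(q(dataset_sa))
    return bellman_error + beta * conservatism

# Toy usage with a linear Q-function.
q_fn = lambda sa: sa.sum(axis=1)
dataset_sa = np.array([[1.0, 1.0], [2.0, 2.0]])   # dataset Q-values: [2, 4]
model_sa = np.array([[3.0, 3.0]])                 # model-rollout Q-value: [6]
targets = np.array([2.0, 4.0])                    # Bellman error is zero here
loss = combo_critic_loss(q_fn, dataset_sa, model_sa, targets, beta=1.0)
# loss = 0 + 1.0 * (6 - 3) = 3.0
```

Minimizing this objective drives Q-values on model rollouts below Q-values on the data support, which is what yields the conservative lower bound the paper analyzes.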

Tianhe Yu, Aviral Kumar, Rafael Rafailov, Aravind Rajeswaran, Sergey Levine, Chelsea Finn • 2021

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Offline Reinforcement Learning | D4RL halfcheetah-medium-expert | Normalized Score | 90 | 117 |
| Offline Reinforcement Learning | D4RL hopper-medium-expert | Normalized Score | 111.1 | 115 |
| Offline Reinforcement Learning | D4RL walker2d-medium-expert | Normalized Score | 103.3 | 86 |
| Offline Reinforcement Learning | D4RL walker2d-random | Normalized Score | 7 | 77 |
| Offline Reinforcement Learning | D4RL Medium-Replay Hopper | Normalized Score | 89.5 | 72 |
| Offline Reinforcement Learning | D4RL halfcheetah-random | Normalized Score | 38.8 | 70 |
| Offline Reinforcement Learning | D4RL hopper-random | Normalized Score | 17.9 | 62 |
| Offline Reinforcement Learning | D4RL Medium HalfCheetah | Normalized Score | 54.2 | 59 |
| Offline Reinforcement Learning | D4RL Medium-Replay HalfCheetah | Normalized Score | 55.1 | 59 |
| Offline Reinforcement Learning | D4RL halfcheetah v2 (medium-replay) | Normalized Score | 55.1 | 58 |

Showing 10 of 135 rows.
