COMBO: Conservative Offline Model-Based Policy Optimization
About
Model-based algorithms, which learn a dynamics model from logged experience and perform some sort of pessimistic planning under the learned model, have emerged as a promising paradigm for offline reinforcement learning (offline RL). However, practical variants of such model-based algorithms rely on explicit uncertainty quantification for incorporating pessimism, and uncertainty estimation with complex models such as deep neural networks can be difficult and unreliable. We overcome this limitation by developing a new model-based offline RL algorithm, COMBO, that regularizes the value function on out-of-support state-action tuples generated via rollouts under the learned model. This results in a conservative estimate of the value function for out-of-support state-action tuples, without requiring explicit uncertainty estimation. We theoretically show that our method optimizes a lower bound on the true policy value, that this bound is tighter than that of prior methods, and that our approach satisfies a policy improvement guarantee in the offline setting. Through experiments, we find that COMBO consistently performs as well as or better than prior offline model-free and model-based methods on widely studied offline RL benchmarks, including image-based tasks.
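To make the key idea concrete, below is a minimal sketch of a COMBO-style conservative critic update: the Q-function is pushed down on state-action tuples drawn from model rollouts and pushed up on tuples from the offline dataset, on top of a standard Bellman error computed over a mixture of real and model-generated data. This is an illustrative assumption-laden sketch, not the authors' reference implementation; the network architecture, batch layout (dictionaries with `state`, `action`, `reward`, `next_state`, `done`), and hyperparameter names (`beta`, `gamma`) are placeholders.

```python
# Illustrative sketch of a COMBO-style conservative critic loss (PyTorch).
# All names, shapes, and hyperparameters are assumptions for exposition only.
import torch
import torch.nn as nn
import torch.nn.functional as F


class QNetwork(nn.Module):
    """Simple state-action value function Q(s, a)."""

    def __init__(self, state_dim, action_dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, state, action):
        return self.net(torch.cat([state, action], dim=-1)).squeeze(-1)


def combo_critic_loss(q_net, target_q_net, policy,
                      real_batch, model_batch,
                      beta=1.0, gamma=0.99):
    """Conservative Bellman loss:
    - push Q down on tuples generated by rollouts under the learned model,
    - push Q up on tuples from the offline dataset,
    - plus a standard Bellman error on the mixture of real and model data.
    No explicit uncertainty estimation is required.
    """
    # Conservative regularizer: penalize Q under the model-rollout
    # distribution and reward Q under the data distribution.
    q_model = q_net(model_batch["state"], model_batch["action"])
    q_data = q_net(real_batch["state"], real_batch["action"])
    conservative_term = beta * (q_model.mean() - q_data.mean())

    # Bellman error on a mixed batch of dataset and model-rollout transitions.
    mix = {k: torch.cat([real_batch[k], model_batch[k]]) for k in real_batch}
    with torch.no_grad():
        next_action = policy(mix["next_state"])
        target = mix["reward"] + gamma * (1.0 - mix["done"]) * \
            target_q_net(mix["next_state"], next_action)
    bellman_term = F.mse_loss(q_net(mix["state"], mix["action"]), target)

    return conservative_term + 0.5 * bellman_term
```

In this sketch, `policy` is any callable mapping next states to actions; in practice the actor would be trained alongside the critic, and the model-rollout batch would be refreshed by short rollouts from the learned dynamics model.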
Related benchmarks
| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Offline Reinforcement Learning | D4RL halfcheetah-medium-expert | Normalized Score | 90 | 117 |
| Offline Reinforcement Learning | D4RL hopper-medium-expert | Normalized Score | 111.1 | 115 |
| Offline Reinforcement Learning | D4RL walker2d-medium-expert | Normalized Score | 103.3 | 86 |
| Offline Reinforcement Learning | D4RL walker2d-random | Normalized Score | 7 | 77 |
| Offline Reinforcement Learning | D4RL Medium-Replay Hopper | Normalized Score | 89.5 | 72 |
| Offline Reinforcement Learning | D4RL halfcheetah-random | Normalized Score | 38.8 | 70 |
| Offline Reinforcement Learning | D4RL hopper-random | Normalized Score | 17.9 | 62 |
| Offline Reinforcement Learning | D4RL Medium HalfCheetah | Normalized Score | 54.2 | 59 |
| Offline Reinforcement Learning | D4RL Medium-Replay HalfCheetah | Normalized Score | 55.1 | 59 |
| Offline Reinforcement Learning | D4RL halfcheetah v2 (medium-replay) | Normalized Score | 55.1 | 58 |