Conservative Q-Learning for Offline Reinforcement Learning
About
Effectively leveraging large, previously collected datasets in reinforcement learning (RL) is a key challenge for large-scale real-world applications. Offline RL algorithms promise to learn effective policies from previously collected, static datasets without further interaction. However, in practice, offline RL presents a major challenge, as standard off-policy RL methods can fail due to overestimation of values induced by the distributional shift between the dataset and the learned policy, especially when training on complex and multi-modal data distributions. In this paper, we propose conservative Q-learning (CQL), which aims to address these limitations by learning a conservative Q-function such that the expected value of a policy under this Q-function lower-bounds its true value. We theoretically show that CQL produces a lower bound on the value of the current policy and that it can be incorporated into a policy learning procedure with theoretical improvement guarantees. In practice, CQL augments the standard Bellman error objective with a simple Q-value regularizer which is straightforward to implement on top of existing deep Q-learning and actor-critic implementations. On both discrete and continuous control domains, we show that CQL substantially outperforms existing offline RL methods, often learning policies that attain 2-5 times higher final return, especially when learning from complex and multi-modal data distributions.
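For the CQL(H) variant described in the paper, the Q-function objective is, roughly,

$$\min_{Q}\; \alpha\,\mathbb{E}_{s\sim\mathcal{D}}\!\left[\log\sum_{a}\exp Q(s,a)\;-\;\mathbb{E}_{a\sim\hat{\pi}_\beta(a\mid s)}\big[Q(s,a)\big]\right]\;+\;\tfrac{1}{2}\,\mathbb{E}_{s,a,s'\sim\mathcal{D}}\!\left[\big(Q(s,a)-\hat{\mathcal{B}}^{\pi}\hat{Q}(s,a)\big)^{2}\right]$$

where $\mathcal{D}$ is the offline dataset, $\hat{\pi}_\beta$ the behavior policy that generated it, $\hat{\mathcal{B}}^{\pi}$ the empirical Bellman operator, and $\alpha$ the regularization weight. Below is a minimal PyTorch sketch of this loss for discrete actions; the `q_net`/`target_net` modules, batch layout, and hyperparameter values are illustrative assumptions, not the authors' reference implementation:

```python
import torch
import torch.nn.functional as F

def cql_loss(q_net, target_net, batch, gamma=0.99, alpha=1.0):
    """Standard Bellman error plus the CQL(H) conservative regularizer.

    Batch layout (an assumption of this sketch):
      s, s2: float tensors (B, obs_dim); a: long tensor (B,);
      r, done: float tensors (B,).
    """
    s, a, r, s2, done = batch

    q_all = q_net(s)                                      # (B, num_actions)
    q_taken = q_all.gather(1, a.unsqueeze(1)).squeeze(1)  # Q(s, a) for dataset actions

    # Ordinary one-step TD target, computed from a frozen target network.
    with torch.no_grad():
        target = r + gamma * (1.0 - done) * target_net(s2).max(dim=1).values

    bellman_error = F.mse_loss(q_taken, target)

    # Conservative term: the logsumexp pushes Q-values down across all
    # actions, while the dataset-action term pushes them back up on
    # in-distribution actions, so learned values lower-bound the true ones.
    conservative = (torch.logsumexp(q_all, dim=1) - q_taken).mean()

    return bellman_error + alpha * conservative
```

This drops into an existing DQN-style training loop in place of the usual TD loss; the continuous-action (actor-critic) variants in the paper instead approximate the logsumexp term by sampling actions from the current policy.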
Related benchmarks
| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Offline Reinforcement Learning | D4RL halfcheetah-medium-expert | Normalized Score | 95 | 117 |
| Offline Reinforcement Learning | D4RL hopper-medium-expert | Normalized Score | 111.9 | 115 |
| Reinforcement Learning | Hopper v5 | Average Return | 1.78e+3 | 93 |
| Auto-bidding | AuctionNet | Score | 363.2 | 90 |
| Offline Reinforcement Learning | D4RL walker2d-medium-expert | Normalized Score | 109.4 | 86 |
| Offline Reinforcement Learning | D4RL walker2d-random | Normalized Score | 270 | 77 |
| Offline Reinforcement Learning | D4RL Medium-Replay Hopper | Normalized Score | 95 | 72 |
| Offline Reinforcement Learning | D4RL halfcheetah-random | Normalized Score | 35.4 | 70 |
| Offline Reinforcement Learning | D4RL Walker2d Medium v2 | Normalized Return | 79.5 | 67 |
| Offline Reinforcement Learning | D4RL hopper-random | Normalized Score | 53.6 | 62 |