Critic Regularized Regression
About
Offline reinforcement learning (RL), also known as batch RL, offers the prospect of policy optimization from large pre-recorded datasets without online environment interaction. It addresses challenges with regard to the cost of data collection and safety, both of which are particularly pertinent to real-world applications of RL. Unfortunately, most off-policy algorithms perform poorly when learning from a fixed dataset. In this paper, we propose a novel offline RL algorithm to learn policies from data using a form of critic-regularized regression (CRR). We find that CRR performs surprisingly well and scales to tasks with high-dimensional state and action spaces -- outperforming several state-of-the-art offline RL algorithms by a significant margin on a wide range of benchmark tasks.
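The core idea of critic-regularized regression is to fit the policy by weighted behavioral cloning, where each dataset action's log-likelihood is weighted by a function of its critic-estimated advantage. A minimal NumPy sketch of the weighting step is below; the function names, the clipping constant, and the Monte Carlo baseline over policy action samples are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def crr_weights(q_values, advantage_mode="exp", beta=1.0):
    """Per-sample weights for critic-regularized regression (sketch).

    q_values: array of shape (batch, n_action_samples); column 0 holds
    Q(s, a) for the dataset action, the remaining columns hold Q(s, a')
    for actions sampled from the current policy, used to estimate a
    baseline V(s).
    """
    q_data = q_values[:, 0]
    v_estimate = q_values.mean(axis=1)      # Monte Carlo baseline for V(s)
    advantage = q_data - v_estimate
    if advantage_mode == "exp":
        # Soft advantage weighting; clipping value 20.0 is an assumption.
        return np.minimum(np.exp(advantage / beta), 20.0)
    # Binary variant: keep only actions the critic prefers over the baseline.
    return (advantage > 0).astype(np.float64)

def crr_loss(log_probs, weights):
    """Weighted behavioral-cloning objective (to be minimized)."""
    return -(weights * log_probs).mean()
```

Because the weights depend only on the fixed dataset and the critic, the policy update never queries the environment, which is what makes the scheme purely offline.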
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Goal Reaching | RoboKitchen (test) | Success Rate | 33.1 | 16 |
| Goal Reaching | pinpad (test) | Average Success Rate | 32.1 | 10 |
| Goal Reaching | maze_large (test) | Success Rate | 17.3 | 10 |
| Goal Reaching | fetch_push (test) | Success Rate | 0.19 | 10 |
| Offline Reinforcement Learning | RL Unplugged Atari 46 games | Median Human Normalized Score | 155.6 | 8 |
| Generative Recommendation | RecSim Extreme Noisy Noise Dominated | Reward | 0.217 | 8 |
| Generative Recommendation | RecSim Medium Quality Mixed Strategy | Reward | 0.222 | 8 |