
Critic Regularized Regression

About

Offline reinforcement learning (RL), also known as batch RL, offers the prospect of policy optimization from large pre-recorded datasets without online environment interaction. It addresses challenges with respect to the cost of data collection and safety, both of which are particularly pertinent to real-world applications of RL. Unfortunately, most off-policy algorithms perform poorly when learning from a fixed dataset. In this paper, we propose a novel offline RL algorithm to learn policies from data using a form of critic-regularized regression (CRR). We find that CRR performs surprisingly well and scales to tasks with high-dimensional state and action spaces -- outperforming several state-of-the-art offline RL algorithms by a significant margin on a wide range of benchmark tasks.
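The core of CRR is behavioral cloning weighted by the critic: dataset actions are up-weighted when the critic estimates they beat the policy's average action, either with a hard indicator ("binary" CRR) or an exponential of the advantage ("exp" CRR). A minimal sketch of that weighting, assuming a numpy setting where Q-values have already been computed (function names, the baseline estimate, and the clipping constant are illustrative assumptions, not the authors' code):

```python
import numpy as np

def crr_weights(q_sa, q_baseline, mode="exp", beta=1.0):
    """Advantage-based weights for critic-regularized regression.

    The advantage A(s, a) = Q(s, a) - V(s) is approximated here by
    q_sa - q_baseline, where q_baseline stands in for an estimate of
    V(s), e.g. the mean Q over actions sampled from the current policy.
    """
    adv = q_sa - q_baseline
    if mode == "binary":
        # Keep only actions the critic scores above the baseline.
        return (adv > 0).astype(float)
    # Exponentially up-weight high-advantage actions; the clip at 20
    # is an assumed stabilizer, not a value from the paper.
    return np.minimum(np.exp(adv / beta), 20.0)

def crr_loss(log_pi, q_sa, q_baseline, mode="exp", beta=1.0):
    """Weighted behavioral-cloning loss: -w(s, a) * log pi(a|s)."""
    w = crr_weights(q_sa, q_baseline, mode=mode, beta=beta)
    return -(w * log_pi).mean()
```

With `mode="binary"` the loss reduces to filtered behavioral cloning: dataset actions the critic judges worse than average contribute nothing to the policy update.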

Ziyu Wang, Alexander Novikov, Konrad Zolna, Jost Tobias Springenberg, Scott Reed, Bobak Shahriari, Noah Siegel, Josh Merel, Caglar Gulcehre, Nicolas Heess, Nando de Freitas • 2020

Related benchmarks

Task | Dataset | Metric | Result | Rank
Goal Reaching | RoboKitchen (test) | Success Rate | 33.1 | 16
Goal Reaching | pinpad (test) | Average Success Rate | 32.1 | 10
Goal Reaching | maze_large (test) | Success Rate | 17.3 | 10
Goal Reaching | fetch_push (test) | Success Rate | 0.19 | 10
Offline Reinforcement Learning | RL Unplugged Atari 46 games | Median Human Normalized Score | 155.6 | 8
Generative Recommendation | RecSim Extreme Noisy Noise Dominated | Reward | 0.217 | 8
Generative Recommendation | RecSim Medium Quality Mixed Strategy | Reward | 0.222 | 8
