
Uncertainty-Based Offline Reinforcement Learning with Diversified Q-Ensemble

About

Offline reinforcement learning (offline RL), which aims to find an optimal policy from a previously collected static dataset, bears algorithmic difficulties due to function approximation errors from out-of-distribution (OOD) data points. To this end, offline RL algorithms adopt either a constraint or a penalty term that explicitly guides the policy to stay close to the given dataset. However, prior methods typically require accurate estimation of the behavior policy or sampling from OOD data points, which can themselves be non-trivial problems. Moreover, these methods under-utilize the generalization ability of deep neural networks and often fall into suboptimal solutions too close to the given dataset. In this work, we propose an uncertainty-based offline RL method that takes into account the confidence of the Q-value prediction and does not require any estimation or sampling of the data distribution. We show that clipped Q-learning, a technique widely used in online RL, can be leveraged to successfully penalize OOD data points with high prediction uncertainties. Surprisingly, we find that it is possible to substantially outperform existing offline RL methods on various tasks by simply increasing the number of Q-networks along with clipped Q-learning. Based on this observation, we propose an ensemble-diversified actor-critic algorithm that reduces the number of required ensemble networks down to a tenth compared to the naive ensemble while achieving state-of-the-art performance on most of the D4RL benchmarks considered.
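The core mechanism the abstract describes is clipped Q-learning scaled to a larger ensemble: the Bellman target takes the minimum over N Q-value estimates, so state-action pairs where the ensemble disagrees (typically OOD points) receive pessimistic targets. Below is a minimal NumPy sketch of that target computation; the toy `QEnsemble` with random linear Q-functions is purely illustrative and not the paper's actual deep networks or training procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

class QEnsemble:
    """Toy ensemble of N linear Q-functions over (state, action) features.

    Illustrative stand-in only; the paper trains N deep Q-networks.
    """
    def __init__(self, n_nets, feat_dim):
        # One random linear Q-function per ensemble member.
        self.weights = rng.normal(size=(n_nets, feat_dim))

    def predict(self, feats):
        # Returns shape (n_nets,): one Q-value estimate per member.
        return self.weights @ feats

def clipped_target(ensemble, reward, next_feats, gamma=0.99):
    """Pessimistic Bellman target: r + gamma * min_i Q_i(s', a').

    The min over the ensemble penalizes points with high
    prediction disagreement, i.e. high epistemic uncertainty.
    """
    q_next = ensemble.predict(next_feats)
    return reward + gamma * np.min(q_next)

ensemble = QEnsemble(n_nets=10, feat_dim=4)
next_feats = rng.normal(size=4)
target = clipped_target(ensemble, reward=1.0, next_feats=next_feats)
```

Increasing `n_nets` can only make the minimum (and hence the target) more pessimistic for a given set of predictions, which is the lever the paper's naive-ensemble baseline turns; the proposed algorithm then diversifies the ensemble so far fewer networks achieve the same effect.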

Gaon An, Seungyong Moon, Jang-Hyun Kim, Hyun Oh Song · 2021

Related benchmarks

Task | Dataset | Metric | Result | Rank
Offline Reinforcement Learning | D4RL halfcheetah-medium-expert | Normalized Score | 106.3 | 117
Offline Reinforcement Learning | D4RL walker2d-random | Normalized Score | 21.7 | 77
Offline Reinforcement Learning | D4RL halfcheetah-random | Normalized Score | 28.4 | 70
Offline Reinforcement Learning | D4RL Walker2d Medium v2 | Normalized Return | 92.5 | 67
Offline Reinforcement Learning | D4RL hopper-random | Normalized Score | 31.3 | 62
Offline Reinforcement Learning | Kitchen Partial | Normalized Score | 33.8 | 62
Offline Reinforcement Learning | D4RL halfcheetah v2 (medium-replay) | Normalized Score | 61.3 | 58
Offline Reinforcement Learning | D4RL halfcheetah-expert v2 | Normalized Score | 106.8 | 56
Offline Reinforcement Learning | D4RL walker2d-expert v2 | Normalized Score | 115.1 | 56
Offline Reinforcement Learning | D4RL hopper-expert v2 | Normalized Score | 110.1 | 56
(Showing 10 of 122 rows)
