
SUMO: Search-Based Uncertainty Estimation for Model-Based Offline Reinforcement Learning

About

The performance of offline reinforcement learning (RL) suffers from the limited size and quality of static datasets. Model-based offline RL addresses this issue by generating synthetic samples through a dynamics model to enhance overall performance. To evaluate the reliability of the generated samples, uncertainty estimation methods are often employed. However, the model ensemble, the most commonly used uncertainty estimation method, is not always the best choice. In this paper, we propose a Search-based Uncertainty estimation method for Model-based Offline RL (SUMO) as an alternative. SUMO characterizes the uncertainty of synthetic samples by measuring their cross entropy against the in-distribution dataset samples, and uses an efficient search-based method for implementation. In this way, SUMO can achieve trustworthy uncertainty estimation. We integrate SUMO into several model-based offline RL algorithms, including MOPO and Adapted MOReL (AMOReL), and provide theoretical analysis for them. Extensive experimental results on D4RL datasets demonstrate that SUMO provides more accurate uncertainty estimation and boosts the performance of the base algorithms. These results indicate that SUMO could be a better uncertainty estimator for model-based offline RL, whether used for reward penalty or trajectory truncation. Our code is available and will be open-sourced for further research and development.
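The abstract describes estimating uncertainty by comparing synthetic samples against in-distribution dataset samples via an efficient search. As a minimal, hedged sketch (not the paper's actual implementation), one common way to realize such a search-based estimator is a k-nearest-neighbor distance in (state, action) space: the farther a synthetic sample lies from its nearest dataset neighbors, the higher its estimated uncertainty. The function name `knn_uncertainty`, the KD-tree backend, and the log-distance proxy below are illustrative assumptions:

```python
import numpy as np
from scipy.spatial import cKDTree

def knn_uncertainty(dataset, queries, k=1):
    """Illustrative search-based uncertainty proxy (not the paper's code).

    Returns, for each query, the log distance to its k-th nearest neighbor
    in the in-distribution dataset: a KNN-style stand-in for the
    cross-entropy measure described in the abstract.
    """
    tree = cKDTree(dataset)               # efficient nearest-neighbor search
    dists, _ = tree.query(queries, k=k)   # distances to the k nearest samples
    if k > 1:
        dists = dists[:, -1]              # keep only the k-th neighbor distance
    return np.log(dists + 1e-8)           # log for numerical spread; eps avoids log(0)

# Example: synthetic samples near the dataset should score lower uncertainty
# than samples far from it; a MOPO-style penalized reward would then be
# r_penalized = r - beta * uncertainty (beta is a tuning coefficient).
rng = np.random.default_rng(0)
data = rng.normal(size=(500, 4))          # stand-in for dataset (s, a) vectors
near = data[:10] + 0.01                   # slightly perturbed in-distribution points
far = data[:10] + 5.0                     # clearly out-of-distribution points
u_near = knn_uncertainty(data, near)
u_far = knn_uncertainty(data, far)
```

In this toy setup `u_near.mean()` comes out well below `u_far.mean()`, which is the qualitative behavior a trustworthy uncertainty estimator should exhibit.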

Zhongjian Qiao, Jiafei Lyu, Kechen Jiao, Qi Liu, Xiu Li • 2024

Related benchmarks

| Task | Dataset | Result | Rank |
| --- | --- | --- | --- |
| Offline Reinforcement Learning | D4RL halfcheetah-medium-expert | Normalized Score: 106.6 | 117 |
| Offline Reinforcement Learning | D4RL hopper-medium-expert | Normalized Score: 107.8 | 115 |
| Offline Reinforcement Learning | D4RL walker2d-random | Normalized Score: 27.9 | 77 |
| Offline Reinforcement Learning | D4RL Medium-Replay Hopper | Normalized Score: 109.9 | 72 |
| Offline Reinforcement Learning | D4RL halfcheetah-random | Normalized Score: 34.9 | 70 |
| Offline Reinforcement Learning | D4RL Medium HalfCheetah | Normalized Score: 84.3 | 59 |
| Offline Reinforcement Learning | D4RL Medium-Replay HalfCheetah | Normalized Score: 76.2 | 59 |
| Offline Reinforcement Learning | D4RL Medium Walker2d | Normalized Score: 94.1 | 58 |
| Offline Reinforcement Learning | D4RL walker2d medium-replay | Normalized Score: 78.2 | 45 |
| Offline Reinforcement Learning | D4RL hopper-random | Mean Normalized Score: 30.8 | 16 |
