
Any-step Dynamics Model Improves Future Predictions for Online and Offline Reinforcement Learning

About

Model-based methods in reinforcement learning offer a promising approach to enhance data efficiency by facilitating policy exploration within a dynamics model. However, accurately predicting sequential steps in the dynamics model remains a challenge because of bootstrapped prediction, in which each next-state prediction is conditioned on the model's own output for the current state. This leads to accumulated errors during model roll-out. In this paper, we propose the Any-step Dynamics Model (ADM) to mitigate the compounding error by reducing bootstrapped prediction to direct prediction. ADM allows variable-length plans to be used as inputs for predicting future states without frequent bootstrapping. We design two algorithms, ADMPO-ON and ADMPO-OFF, which apply ADM in online and offline model-based frameworks, respectively. In the online setting, ADMPO-ON demonstrates improved sample efficiency compared to previous state-of-the-art methods. In the offline setting, ADMPO-OFF not only demonstrates superior performance compared to recent state-of-the-art offline approaches but also offers better quantification of model uncertainty using only a single ADM.
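The abstract contrasts bootstrapped one-step rollouts, where each prediction is fed back in as the next input, with ADM's direct any-step prediction from a state and a plan of actions. The toy sketch below (not the paper's implementation: the linear dynamics, the constant per-call error `eps`, and all function names are illustrative assumptions) shows why the bootstrapped rollout accumulates error over a horizon while a single direct prediction pays the model error only once.

```python
import numpy as np

# Hypothetical linear environment: s' = A s + B a (illustrative only).
A = np.eye(2) * 0.9
B = np.ones((2, 1)) * 0.1

def env_step(s, a):
    return A @ s + B @ a

# Pretend each learned-model call carries a small constant bias eps.
eps = 0.01

def one_step_model(s, a):
    # One-step model: bias is incurred on every call.
    return env_step(s, a) + eps

def any_step_model(s, actions):
    # Stand-in for a direct k-step predictor mapping (s_t, a_t..a_{t+k-1})
    # to s_{t+k}: the bias is incurred once, not once per step.
    for a in actions:
        s = env_step(s, a)
    return s + eps

s0 = np.zeros(2)
actions = [np.array([1.0])] * 5

# Bootstrapped rollout: feed each prediction back in as the next input.
s_boot, s_true = s0, s0
for a in actions:
    s_boot = one_step_model(s_boot, a)
    s_true = env_step(s_true, a)

s_any = any_step_model(s0, actions)

err_boot = np.abs(s_boot - s_true).max()  # compounded over 5 steps
err_any = np.abs(s_any - s_true).max()    # single prediction error
assert err_any < err_boot
```

Under these assumptions the bootstrapped error grows roughly as a geometric sum of `eps` terms over the horizon, while the direct prediction stays at `eps`, which is the intuition behind replacing frequent bootstrapping with any-step prediction.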

Haoxin Lin, Yu-Yan Xu, Yihao Sun, Zhilong Zhang, Yi-Chen Li, Chengxing Jia, Junyin Ye, Jiaji Zhang, Yang Yu · 2024

Related benchmarks

Task                           | Dataset                                  | Metric               | Result | Rank
-------------------------------|------------------------------------------|----------------------|--------|-----
Offline Reinforcement Learning | D4RL halfcheetah-medium-expert           | Normalized Score     | 103.7  | 117
Offline Reinforcement Learning | D4RL hopper-medium-expert                | Normalized Score     | 112.7  | 115
Offline Reinforcement Learning | D4RL walker2d-random                     | Normalized Score     | 22.2   | 77
Offline Reinforcement Learning | D4RL Medium-Replay Hopper                | Normalized Score     | 104.4  | 72
Offline Reinforcement Learning | D4RL halfcheetah-random                  | Normalized Score     | 45.4   | 70
Offline Reinforcement Learning | D4RL Medium HalfCheetah                  | Normalized Score     | 72.2   | 59
Offline Reinforcement Learning | D4RL Medium-Replay HalfCheetah           | Normalized Score     | 67.6   | 59
Offline Reinforcement Learning | D4RL Medium Walker2d                     | Normalized Score     | 93.2   | 58
Offline Reinforcement Learning | D4RL walker2d medium-replay              | Normalized Score     | 95.6   | 45
Offline Reinforcement Learning | D4RL MuJoCo Hopper-mr v2 (medium-replay) | Avg Normalized Score | 104.4  | 29

(10 of 37 benchmark results shown.)
