
Policy Optimization for Continuous Reinforcement Learning

About

We study reinforcement learning (RL) in continuous time and space, over an infinite horizon with a discounted objective, where the underlying dynamics are driven by a stochastic differential equation. Building on recent advances in continuous-time RL, we develop a notion of occupation time (specifically for a discounted objective) and show how it can be used to derive performance-difference and local-approximation formulas. We further extend these results to illustrate their applications in policy gradient (PG) and trust region / proximal policy optimization (TRPO/PPO) methods, which are well-established and powerful tools in discrete RL but remain underdeveloped in the continuous setting. Through numerical experiments, we demonstrate the effectiveness and advantages of our approach.
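The discounted occupation measure mentioned above can be understood concretely: for a controlled diffusion X_t, it weights time spent in each region of the state space by e^{-beta t}. The sketch below is a hypothetical illustration (not the paper's code) of estimating such a measure by Monte Carlo, using Euler-Maruyama simulation of an assumed 1-D SDE dX_t = policy(X_t) dt + dW_t; the drift, bin grid, and parameter values are all illustrative assumptions.

```python
# Hypothetical sketch: Monte Carlo estimate of a discounted occupation
# measure for a 1-D controlled SDE, via Euler-Maruyama discretization.
import numpy as np

def simulate_occupation(policy, x0=0.0, beta=1.0, dt=0.01, T=10.0,
                        n_paths=1000, seed=0):
    """Estimate the discounted time spent in each state bin,
    i.e. E[ integral_0^T e^{-beta t} 1{X_t in bin} dt ].

    Assumed dynamics (for illustration only): dX_t = policy(X_t) dt + dW_t.
    """
    rng = np.random.default_rng(seed)
    n_steps = int(T / dt)
    x = np.full(n_paths, x0)
    bins = np.linspace(-3.0, 3.0, 25)     # coarse state discretization
    occ = np.zeros(len(bins) - 1)         # discounted visitation per bin
    discount = 1.0
    for _ in range(n_steps):
        hist, _ = np.histogram(x, bins=bins)
        occ += discount * dt * hist / n_paths
        # one Euler-Maruyama step of the SDE
        x += policy(x) * dt + np.sqrt(dt) * rng.standard_normal(n_paths)
        discount *= np.exp(-beta * dt)
    return bins, occ

# Usage: a mean-reverting drift keeps mass near the origin, so the total
# discounted mass is close to (1 - e^{-beta*T}) / beta ~ 1 here.
bins, occ = simulate_occupation(lambda x: -x)
total_mass = occ.sum()
```

Once such a measure is estimated for two policies, performance-difference formulas of the kind described in the abstract compare the policies by integrating advantage-like quantities against it.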

Hanyang Zhao, Wenpin Tang, David D. Yao • 2023

Related benchmarks

Task                    Dataset    Result                  Rank
Reinforcement Learning  Walker     Average Returns 51.77   38
Quadruped               Quadruped  Return 160.2            33
Reinforcement Learning  Humanoid   Zero-Shot Reward 1.16   30
Reinforcement Learning  Trading    Return 23.46            24
Reinforcement Learning  cheetah    Return 174.5            24
