
State Regularized Policy Optimization on Data with Dynamics Shift

About

In many real-world scenarios, Reinforcement Learning (RL) algorithms are trained on data with dynamics shift, i.e., data collected under different underlying environment dynamics. Most current methods address this issue by training context encoders to identify environment parameters; data with dynamics shift are then separated according to their environment parameters to train the corresponding policies. However, these methods can be sample inefficient, since data are used ad hoc and a policy trained for one dynamics cannot benefit from data collected in other environments with different dynamics. In this paper, we find that in many environments with similar structures but different dynamics, optimal policies have similar stationary state distributions. We exploit this property and learn the stationary state distribution from data with dynamics shift for efficient data reuse. This distribution is used to regularize the policy trained in a new environment, leading to the SRPO (State Regularized Policy Optimization) algorithm. For the theoretical analysis, the intuition of similar environment structures is formalized by the notion of homomorphous MDPs, and we prove a lower-bound performance guarantee for policies regularized by the stationary state distribution. In practice, SRPO can serve as an add-on module to context-based algorithms in both online and offline RL settings. Experimental results show that SRPO makes several context-based algorithms far more data efficient and significantly improves their overall performance.
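To make the regularization idea in the abstract concrete, below is a minimal illustrative sketch (not the authors' implementation) of one common way to realize a stationary-state-distribution penalty: a GAIL-style discriminator is trained to separate states from the pooled multi-dynamics data from states visited by the current policy, and its log-density-ratio is added to the reward as a bonus. All names (`StateDiscriminator`, `regularized_reward`, the coefficient `beta`) are hypothetical and chosen for illustration.

```python
# Illustrative sketch of state-distribution regularization; assumptions noted above.
import torch
import torch.nn as nn
import torch.nn.functional as F


class StateDiscriminator(nn.Module):
    """Scores how likely a state is under the pooled multi-dynamics state distribution."""

    def __init__(self, state_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, states: torch.Tensor) -> torch.Tensor:
        return self.net(states)  # raw logits


def discriminator_loss(disc: StateDiscriminator,
                       pooled_states: torch.Tensor,
                       policy_states: torch.Tensor) -> torch.Tensor:
    """Binary cross-entropy: states from the pooled data get label 1,
    states from current-policy rollouts get label 0."""
    ones = torch.ones(pooled_states.size(0), 1)
    zeros = torch.zeros(policy_states.size(0), 1)
    return (F.binary_cross_entropy_with_logits(disc(pooled_states), ones)
            + F.binary_cross_entropy_with_logits(disc(policy_states), zeros))


def regularized_reward(disc: StateDiscriminator,
                       states: torch.Tensor,
                       rewards: torch.Tensor,
                       beta: float = 0.1) -> torch.Tensor:
    """Augment the environment reward with beta * log(D / (1 - D)).
    For a sigmoid discriminator this log-density-ratio equals the raw logit,
    so states favored by the pooled distribution receive a positive bonus."""
    with torch.no_grad():
        bonus = disc(states)  # logit == log D(s) - log(1 - D(s))
    return rewards + beta * bonus.squeeze(-1)
```

Under these assumptions, an online loop would alternate `discriminator_loss` updates with any base RL update run on `regularized_reward`; in an offline setting the same bonus could be folded into the dataset rewards before training.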

Zhenghai Xue, Qingpeng Cai, Shuchang Liu, Dong Zheng, Peng Jiang, Kun Gai, Bo An · 2023

Related benchmarks

| Task | Dataset | Metric | Score | Rank |
|------|---------|--------|-------|------|
| Offline Reinforcement Learning | hopper medium | Normalized Score | 12.4 | 52 |
| Offline Reinforcement Learning | halfcheetah medium | Normalized Score | 36.9 | 43 |
| Offline Reinforcement Learning | halfcheetah medium-replay | Normalized Score | 17.5 | 43 |
| Offline Policy Adaptation | halfcheetah medium-expert | Normalized Score | 42.5 | 14 |
| Offline Policy Adaptation | walker2d medium-expert | Normalized Score | 46.4 | 14 |
| Offline Policy Adaptation | ant medium | Normalized Score | 72.8 | 14 |
| Offline Policy Adaptation | walker2d medium | Normalized Score | 38.6 | 14 |
| Offline Policy Adaptation | walker2d medium-replay | Normalized Score | 36.0 | 14 |
| Offline Policy Adaptation | ant medium-expert | Normalized Score | 68.5 | 14 |
| Cross-Domain Offline Policy Adaptation | hopper-med Source Target | Normalized Score | 26.5 | 14 |

Showing 10 of 35 rows.
