
MOBODY: Model Based Off-Dynamics Offline Reinforcement Learning

About

We study off-dynamics offline reinforcement learning, where the goal is to learn a policy from an offline source dataset and a limited target dataset with mismatched dynamics. Existing methods either penalize the reward or discard source transitions that occur in regions of the transition space with high dynamics shift. As a result, they optimize the policy using data from low-shift regions only, which limits exploration of high-reward states in the target domain that lie outside those regions. Consequently, such methods often fail when the dynamics shift is significant or when the optimal trajectories fall outside the low-shift regions. To overcome this limitation, we propose MOBODY, a Model-Based Off-Dynamics offline RL algorithm that optimizes the policy with transitions generated by a learned target dynamics model, allowing it to explore the target domain rather than train only on low-dynamics-shift transitions. For dynamics learning, building on the observation that reaching the same next state requires different actions in different domains, MOBODY employs a separate action encoder for each domain that maps actions into a shared latent space, while sharing a unified state representation and a common transition function across domains. We further introduce a target-Q-weighted behavior cloning loss for policy optimization that avoids out-of-distribution actions by pushing the policy toward actions with high target-domain Q-values, rather than toward actions with high source-domain Q-values or a uniform imitation of all actions in the offline dataset. We evaluate MOBODY on a wide range of MuJoCo and Adroit benchmarks and show that it outperforms state-of-the-art off-dynamics RL baselines, as well as policy learning methods built on alternative dynamics models, with especially pronounced gains in challenging settings where existing methods struggle.
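The two core ideas in the abstract can be illustrated with a minimal numpy sketch. This is not the authors' implementation: the dimensions, the linear maps standing in for trained networks, and the softmax form of the Q-weighting are all assumptions made for illustration. It shows (1) a dynamics model with a shared state encoder and transition head but a separate action encoder per domain, and (2) behavior-cloning weights computed from target-domain Q-values so that high-target-value actions are imitated preferentially.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not specified in the abstract).
STATE_DIM, ACTION_DIM, LATENT_DIM = 4, 2, 8

def linear(in_dim, out_dim):
    """A random linear map standing in for a trained network."""
    return rng.normal(scale=0.1, size=(in_dim, out_dim))

# Shared across domains: state encoder and latent transition head.
W_state = linear(STATE_DIM, LATENT_DIM)
W_trans = linear(2 * LATENT_DIM, STATE_DIM)

# Separate per domain: the same latent effect may require different
# raw actions in the source vs. the target domain.
W_action = {"source": linear(ACTION_DIM, LATENT_DIM),
            "target": linear(ACTION_DIM, LATENT_DIM)}

def predict_next_state(state, action, domain):
    z_s = state @ W_state               # shared state representation
    z_a = action @ W_action[domain]     # domain-specific action latent
    return np.concatenate([z_s, z_a]) @ W_trans  # shared transition

# Target-Q-weighted behavior cloning: weight each dataset action by a
# softmax over its target-domain Q-value, so the policy imitates
# high-target-value actions instead of all actions uniformly.
# (The softmax/temperature form is an assumption for this sketch.)
def q_weighted_bc_weights(q_target_values, temperature=1.0):
    logits = np.asarray(q_target_values, dtype=float) / temperature
    logits -= logits.max()              # numerical stability
    w = np.exp(logits)
    return w / w.sum()

state = rng.normal(size=STATE_DIM)
action = rng.normal(size=ACTION_DIM)
next_s = predict_next_state(state, action, "target")

# Actions with higher target-domain Q-values get larger BC weights.
weights = q_weighted_bc_weights([1.0, 3.0, 2.0])
```

In this factorization, only the action encoders differ between domains, which is one way to let limited target data correct for the dynamics mismatch while the bulk of the model is trained on the larger source dataset.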

Yihong Guo, Yu Yang, Pan Xu, Anqi Liu • 2025

Related benchmarks

Task | Dataset | Normalized Score | Rank
Offline Reinforcement Learning | hopper medium | 13.05 | 58
Robot Hand Manipulation | Adroit Pen medium hard v0 | 37.8 | 24
Robot Hand Manipulation | Adroit Door medium hard v0 | 63.67 | 24
Offline Reinforcement Learning | ant-kine-m | 74.92 | 10
Offline Reinforcement Learning | HalfCheetah morph-thigh Medium | 27.18 | 6
Offline Reinforcement Learning | HalfCheetah morph-thigh Hard | 28.51 | 6
Offline Reinforcement Learning | HalfCheetah morph-torso Medium | 23.92 | 6
Offline Reinforcement Learning | HalfCheetah morph-torso Hard | 40.45 | 6
Offline Reinforcement Learning | HalfCheetah kin-thighjnt Medium | 59.17 | 6
Offline Reinforcement Learning | HalfCheetah kin-thighjnt Hard | 56.72 | 6
Showing 10 of 34 rows
