
Cross-Domain Offline Policy Adaptation via Selective Transition Correction

About

It remains a critical challenge to adapt policies across domains with mismatched dynamics in reinforcement learning (RL). In this paper, we study cross-domain offline RL, where an offline dataset from another similar source domain can be accessed to enhance policy learning upon a target domain dataset. Directly merging the two datasets may lead to suboptimal performance due to potential dynamics mismatches. Existing approaches typically mitigate this issue through source domain transition filtering or reward modification, which, however, may lead to insufficient exploitation of the valuable source domain data. Instead, we propose to modify the source domain data into the target domain data. To that end, we leverage an inverse policy model and a reward model to correct the actions and rewards of source transitions, explicitly achieving alignment with the target dynamics. Since limited data may result in inaccurate model training, we further employ a forward dynamics model to retain corrected samples that better match the target dynamics than the original transitions. Consequently, we propose the Selective Transition Correction (STC) algorithm, which enables reliable usage of source domain data for policy adaptation. Experiments on various environments with dynamics shifts demonstrate that STC achieves superior performance against existing baselines.
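The selection rule described above can be sketched in a few lines. This is a minimal illustration of the idea, not the paper's implementation: `inverse_policy`, `reward_model`, and `forward_model` are hypothetical stand-ins for the learned models, and the forward model's one-step prediction error serves as the match criterion against the target dynamics.

```python
import numpy as np

def selective_transition_correction(transitions, inverse_policy,
                                    reward_model, forward_model):
    """Sketch of the STC idea: for each source transition (s, a, r, s'),
    1. correct the action with an inverse policy model,
    2. relabel the reward with a reward model,
    3. keep the corrected sample only if the forward dynamics model
       reconstructs s' better from the corrected action than from the
       original one; otherwise fall back to the original transition.
    """
    corrected = []
    for s, a, r, s_next in transitions:
        a_hat = inverse_policy(s, s_next)            # corrected action
        r_hat = reward_model(s, a_hat, s_next)       # corrected reward
        # One-step prediction error under the target dynamics model.
        err_orig = np.linalg.norm(forward_model(s, a) - s_next)
        err_corr = np.linalg.norm(forward_model(s, a_hat) - s_next)
        if err_corr < err_orig:
            corrected.append((s, a_hat, r_hat, s_next))
        else:
            corrected.append((s, a, r, s_next))
    return corrected
```

As a toy check, if the target dynamics were simply `s' = s + a`, the inverse model `s' - s` recovers the action that the target domain would have needed, and a source transition whose original action disagrees with the observed `s'` gets replaced by the corrected one.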

Mengbei Yan, Jiafei Lyu, Shengjie Sun, Zhongjian Qiao, Jingwen Yang, Zichuan Lin, Deheng Ye, Xiu Li• 2026

Related benchmarks

| Task | Dataset | Normalized Score | Rank |
| --- | --- | --- | --- |
| Offline Reinforcement Learning | hopper medium | 43.4 | 52 |
| Offline Reinforcement Learning | halfcheetah medium | 42.4 | 43 |
| Offline Reinforcement Learning | halfcheetah medium-replay | 26.7 | 43 |
| Cross-Domain Offline Policy Adaptation | hopper-med Source Target | 41.6 | 14 |
| Cross-Domain Offline Policy Adaptation | ant-med med Source Target | 60.6 | 14 |
| Offline Policy Adaptation | hopper medium-replay | 36.8 | 14 |
| Offline Policy Adaptation | hopper medium-expert | 53.4 | 14 |
| Offline Policy Adaptation | walker2d medium | 56.7 | 14 |
| Offline Policy Adaptation | walker2d medium-replay | 63.1 | 14 |
| Offline Policy Adaptation | walker2d medium-expert | 62.1 | 14 |
Showing 10 of 36 rows
