Cross-Domain Offline Policy Adaptation via Selective Transition Correction
About
Adapting policies across domains with mismatched dynamics remains a critical challenge in reinforcement learning (RL). In this paper, we study cross-domain offline RL, where an offline dataset from a similar source domain can be accessed to supplement policy learning on a target domain dataset. Directly merging the two datasets may lead to suboptimal performance due to potential dynamics mismatches. Existing approaches typically mitigate this issue by filtering source domain transitions or modifying their rewards, which, however, may underexploit the valuable source domain data. Instead, we propose to modify the source domain data into target domain data. To that end, we leverage an inverse policy model and a reward model to correct the actions and rewards of source transitions, explicitly aligning them with the target dynamics. Since limited data may result in inaccurate model training, we further employ a forward dynamics model to retain only those corrected samples that match the target dynamics better than the original transitions. The resulting algorithm, Selective Transition Correction (STC), enables reliable usage of source domain data for policy adaptation. Experiments on various environments with dynamics shifts demonstrate that STC achieves superior performance against existing baselines.
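The correct-then-select loop described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the model interfaces (`inverse_policy`, `reward_model`, `forward_dynamics`) are hypothetical stand-ins for learned networks, and the selection rule compares forward-prediction error before and after correction.

```python
import numpy as np

def selective_transition_correction(source_batch, inverse_policy,
                                    reward_model, forward_dynamics):
    """Correct source-domain transitions toward the target dynamics, keeping
    only corrections that the target forward-dynamics model scores as a
    better fit than the original transition. Interfaces are illustrative."""
    corrected = []
    for (s, a, r, s_next) in source_batch:
        # Inverse policy model: which action would, under target dynamics,
        # carry state s to s_next? Relabel the action accordingly.
        a_hat = inverse_policy(s, s_next)
        # Reward model: relabel the reward consistently with the new action.
        r_hat = reward_model(s, a_hat, s_next)
        # Selection via target forward dynamics: keep the corrected tuple
        # only if its one-step prediction error beats the original's.
        err_orig = np.linalg.norm(forward_dynamics(s, a) - s_next)
        err_corr = np.linalg.norm(forward_dynamics(s, a_hat) - s_next)
        if err_corr < err_orig:
            corrected.append((s, a_hat, r_hat, s_next))
    return corrected

# Toy check with target dynamics s' = s + a (so the inverse model is s' - s).
# The source action 0.5 undershoots the observed next state under the target
# dynamics; the corrected action 1.0 fits exactly, so the sample is retained.
batch = [(0.0, 0.5, 0.0, 1.0)]
kept = selective_transition_correction(
    batch,
    inverse_policy=lambda s, sn: sn - s,
    reward_model=lambda s, a, sn: -abs(a),
    forward_dynamics=lambda s, a: s + a,
)
```

The retained tuples then augment the target dataset for any standard offline RL learner; transitions whose correction does not improve target-dynamics consistency are simply discarded rather than trusted.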
Related benchmarks
| Task | Dataset | Normalized Score | Rank |
|---|---|---|---|
| Offline Reinforcement Learning | hopper medium | 43.4 | 52 |
| Offline Reinforcement Learning | halfcheetah medium | 42.4 | 43 |
| Offline Reinforcement Learning | halfcheetah medium-replay | 26.7 | 43 |
| Cross-Domain Offline Policy Adaptation | hopper-med Source Target | 41.6 | 14 |
| Cross-Domain Offline Policy Adaptation | ant-med med Source Target | 60.6 | 14 |
| Offline Policy Adaptation | hopper medium-replay | 36.8 | 14 |
| Offline Policy Adaptation | hopper medium-expert | 53.4 | 14 |
| Offline Policy Adaptation | walker2d medium | 56.7 | 14 |
| Offline Policy Adaptation | walker2d medium-replay | 63.1 | 14 |
| Offline Policy Adaptation | walker2d medium-expert | 62.1 | 14 |