Cross-Domain Policy Adaptation via Value-Guided Data Filtering
About
Generalizing policies across domains with dynamics mismatch poses a significant challenge in reinforcement learning. For example, a robot may learn a policy in a simulator, yet the dynamics it faces when deployed in the real world can differ. Given a source and a target domain with dynamics mismatch, we consider the online dynamics adaptation problem, where the agent can access sufficient source domain data while online interactions with the target domain are limited. Existing research has attempted to solve the problem from the dynamics discrepancy perspective. In this work, we reveal the limitations of these methods and explore the problem from the value difference perspective via a novel insight on the value consistency across domains. Specifically, we present the Value-Guided Data Filtering (VGDF) algorithm, which selectively shares transitions from the source domain based on the proximity of paired value targets across the two domains. Empirical results on various environments with kinematic and morphology shifts demonstrate that our method achieves superior performance compared to prior approaches.
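The filtering idea can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `q_value` function, the `predict` method of the dynamics models, and the transition tuple layout are all hypothetical stand-ins. For each source transition, we compare its observed value target against the value target predicted by an ensemble of learned target-domain dynamics models, and share only the transitions whose paired value targets are closest.

```python
import numpy as np

def filter_source_transitions(batch, q_value, target_dynamics_ensemble,
                              gamma=0.99, keep_ratio=0.25):
    """Keep the fraction of source transitions whose value targets best
    agree with those predicted under the target-domain dynamics models.

    batch: list of (state, action, reward, next_state) tuples (hypothetical layout)
    q_value: maps a state to its estimated value (hypothetical signature)
    target_dynamics_ensemble: models with a predict(state, action) method (hypothetical)
    """
    scores = []
    for (s, a, r, s_next_src) in batch:
        # Value target as observed in the source domain
        v_src = r + gamma * q_value(s_next_src)
        # Mean value target under the ensemble of target-domain dynamics models
        v_tgt = np.mean([r + gamma * q_value(m.predict(s, a))
                         for m in target_dynamics_ensemble])
        # Higher score = smaller gap between the paired value targets
        scores.append(-abs(v_src - v_tgt))
    # Share only the top keep_ratio fraction of transitions
    k = max(1, int(len(batch) * keep_ratio))
    idx = np.argsort(scores)[-k:]
    return [batch[i] for i in idx]
```

With scalar states and an identity-dynamics model for the target domain, a transition whose source next state matches the target-domain prediction is retained, while one with a large value-target gap is filtered out.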
Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Cross-Domain Online Policy Adaptation | ODRL | HalfCheetah (Gravity) Score: 43.16 | 5 |
| Reinforcement Learning | D4RL HalfCheetah no thighs (medium) | Mean Return: 3.91e+3 | 4 |
| Reinforcement Learning | D4RL Hopper broken hips (medium) | Mean Return: 2.79e+3 | 4 |
| Reinforcement Learning | D4RL Hopper short feet (medium) | Mean Return: 3.06e+3 | 4 |
| Reinforcement Learning | D4RL Walker no right thigh (medium) | Mean Return: 3.29e+3 | 4 |
| Reinforcement Learning | D4RL HalfCheetah broken back thigh (medium) | Mean Return: 4.83e+3 | 4 |
| Reinforcement Learning | D4RL Walker broken right thigh (medium) | Mean Return: 3.00e+3 | 4 |