Off-Dynamics Reinforcement Learning: Training for Transfer with Domain Classifiers
About
We propose a simple, practical, and intuitive approach to domain adaptation in reinforcement learning. Our approach stems from the idea that the agent's experience in the source domain should look similar to its experience in the target domain. Building on a probabilistic view of RL, we formally show that we can achieve this goal by compensating for the difference in dynamics through a modified reward function. This modified reward function is simple to estimate by learning auxiliary classifiers that distinguish source-domain transitions from target-domain transitions. Intuitively, the modified reward function penalizes the agent for visiting states and taking actions in the source domain that are not possible in the target domain. Said another way, the agent is penalized for transitions that would reveal that it is interacting with the source domain rather than the target domain. Our approach applies to domains with continuous states and actions and does not require learning an explicit model of the dynamics. On discrete and continuous control tasks, we illustrate the mechanics of our approach and demonstrate its scalability to high-dimensional tasks.
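As a minimal sketch of the classifier-based reward modification described above: the dynamics gap can be estimated from two binary domain classifiers, one conditioned on the full transition (s, a, s') and one on the pair (s, a), whose logit difference gives the reward correction. The function name and inputs below are illustrative, not from the paper's codebase; here the classifiers are represented simply by their predicted probabilities that a sample came from the target domain.

```python
import numpy as np

def reward_correction(p_target_sas, p_target_sa):
    """Estimate the reward correction Delta r(s, a, s') from two domain
    classifiers:

        Delta r = log p(target | s, a, s') - log p(source | s, a, s')
                - log p(target | s, a)     + log p(source | s, a)

    p_target_sas: classifier probability that the transition (s, a, s')
                  came from the target domain.
    p_target_sa:  classifier probability that the pair (s, a) came from
                  the target domain (cancels the state-action marginal).
    """
    # Convert probabilities to logits: log p(target|.) - log p(source|.)
    logit_sas = np.log(p_target_sas) - np.log1p(-p_target_sas)
    logit_sa = np.log(p_target_sa) - np.log1p(-p_target_sa)
    return logit_sas - logit_sa
```

A transition that the (s, a, s') classifier confidently flags as source-only (probability near 0 of coming from the target domain) yields a large negative correction, penalizing the agent exactly as the abstract describes; a transition equally likely under both dynamics yields a correction of zero.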
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Offline Reinforcement Learning | Antmaze Medium play offline (target domain) | Target Domain Score (Normalized) | 288.4 | 42 |
| Locomotion | D4RL Ant medium-offline | Normalized Score | 75.03 | 36 |
| Locomotion | D4RL Walker2d medium-offline | Normalized Score | 19.79 | 36 |
| Locomotion | D4RL Hopper medium-offline | Score | 14.07 | 36 |
| Locomotion | D4RL HalfCheetah medium-offline | Normalized Score | 19.86 | 36 |
| Offline Reinforcement Learning | Adroit Pen (target domain) | Normalized Target-Domain Score | 46.17 | 24 |
| Offline Reinforcement Learning | Adroit Door (target domain) | Target Domain Score | 58.91 | 24 |
| Reinforcement Learning | MuJoCo Half-Cheetah | Average Return | 7.00e+3 | 18 |
| Offline Reinforcement Learning | ODRL HalfCheetah Friction (medium) | Score (Level 0.1) | 26.39 | 6 |
| Offline Reinforcement Learning | ODRL Ant Friction (medium) | Normalized Score (Level 0.1) | 55.56 | 6 |