Confounding Robust Continuous Control via Automatic Reward Shaping
About
Reward shaping has been applied widely to accelerate the training of Reinforcement Learning (RL) agents. However, a principled way of designing effective reward shaping functions, especially for complex continuous control problems, remains largely under-explored. In this work, we propose to automatically learn a reward shaping function for continuous control problems from offline datasets, which may be contaminated by unobserved confounding variables. Specifically, our method builds upon the recently proposed causal Bellman equation to learn a tight upper bound on the optimal state values, which is then used as the potential function in the Potential-Based Reward Shaping (PBRS) framework. Our proposed reward shaping algorithm is tested with Soft Actor-Critic (SAC) on multiple commonly used continuous control benchmarks and exhibits strong performance under unobserved confounders. More broadly, our work marks a solid first step towards confounding-robust continuous control from a causal perspective. Code for training our reward shaping functions can be found at https://github.com/mateojuliani/confounding_robust_cont_control.
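To make the PBRS step concrete, the sketch below shows how a learned potential is folded into the reward signal. The function and variable names here are illustrative, not taken from the repository; `phi` stands in for the learned upper bound on optimal state values described above.

```python
# Minimal sketch of Potential-Based Reward Shaping (PBRS).
# `phi` plays the role of the learned upper bound on the optimal
# state value; all names below are hypothetical placeholders.

def shaped_reward(reward, state, next_state, phi, gamma=0.99):
    """PBRS-shaped reward: r' = r + gamma * phi(s') - phi(s).

    Shaping with a potential function preserves the set of optimal
    policies of the underlying MDP (Ng et al., 1999).
    """
    return reward + gamma * phi(next_state) - phi(state)


# Toy 1-D potential: negative distance to a goal at x = 1.0,
# standing in for the learned value upper bound.
def toy_phi(state):
    return -abs(1.0 - state)


# Moving toward the goal yields a positive shaping bonus.
bonus = shaped_reward(0.0, state=0.0, next_state=0.5, phi=toy_phi, gamma=1.0)
```

In practice, the base environment reward from an SAC rollout would be passed through `shaped_reward` at each transition, leaving the optimal policy unchanged while densifying the learning signal.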
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Reinforcement Learning | Hopper v5 | Average Return | 3.58e+3 | 93 |
| Reinforcement Learning | Ant v5 | Average Return | 3.39e+3 | 49 |
| Reinforcement Learning | Walker2D v5 | Average Return | 4.30e+3 | 43 |
| Reinforcement Learning | Halfcheetah v5 | Average Return | 8.95e+3 | 43 |
| Reinforcement Learning | AdroitHandDoor v1 | Average Return | 1.73e+3 | 12 |
| Reinforcement Learning | AdroitHandRelocate v1 | Average Return | 30 | 10 |
| Reinforcement Learning | 18 Confounded Environments Aggregate | Normalized Mean | 1.29 | 5 |