
Confounding Robust Continuous Control via Automatic Reward Shaping

About

Reward shaping has been widely applied to accelerate the training of Reinforcement Learning (RL) agents. However, a principled way of designing effective reward shaping functions, especially for complex continuous control problems, remains largely under-explored. In this work, we propose to automatically learn a reward shaping function for continuous control problems from offline datasets that may be contaminated by unobserved confounding variables. Specifically, our method builds on the recently proposed causal Bellman equation to learn a tight upper bound on the optimal state values, which then serves as the potential function in the Potential-Based Reward Shaping (PBRS) framework. Our reward shaping algorithm is tested with Soft Actor-Critic (SAC) on multiple commonly used continuous control benchmarks and demonstrates strong performance in the presence of unobserved confounders. More broadly, our work marks a solid first step towards confounding-robust continuous control from a causal perspective. Code for training our reward shaping functions can be found at https://github.com/mateojuliani/confounding_robust_cont_control.
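
For readers unfamiliar with PBRS, the minimal sketch below shows how a learned potential plugs into the standard shaping term F(s, s') = γΦ(s') − Φ(s) from Ng et al. (1999), which preserves the optimal policy. Here `potential_fn` is a hypothetical placeholder for the learned upper bound on optimal state values; the actual causal-Bellman-based training code lives in the linked repository.

```python
import numpy as np

def shaped_reward(reward, state, next_state, potential_fn,
                  gamma=0.99, done=False):
    """Apply the PBRS shaping term: r + gamma * Phi(s') - Phi(s).

    `potential_fn` stands in for a learned potential (e.g., an upper
    bound on the optimal state value). The terminal potential is
    conventionally set to zero so shaping adds no net reward over an
    episode.
    """
    phi_s = potential_fn(state)
    phi_next = 0.0 if done else potential_fn(next_state)
    return reward + gamma * phi_next - phi_s

if __name__ == "__main__":
    # Toy potential for illustration only, NOT the paper's learned bound.
    toy_potential = lambda s: float(np.linalg.norm(s))
    s, s_next = np.array([0.0, 0.0]), np.array([0.1, 0.0])
    print(shaped_reward(reward=1.0, state=s, next_state=s_next,
                        potential_fn=toy_potential))
```

In practice the environment's reward is replaced by this shaped reward inside the SAC training loop, leaving the rest of the algorithm unchanged.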

Mateo Juliani, Mingxuan Li, Elias Bareinboim • 2026

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
| --- | --- | --- | --- | --- |
| Reinforcement Learning | Hopper v5 | Average Return | 3.58e+3 | 93 |
| Reinforcement Learning | Ant v5 | Average Return | 3.39e+3 | 49 |
| Reinforcement Learning | Walker2D v5 | Average Return | 4.30e+3 | 43 |
| Reinforcement Learning | HalfCheetah v5 | Average Return | 8.95e+3 | 43 |
| Reinforcement Learning | AdroitHandDoor v1 | Average Return | 1.73e+3 | 12 |
| Reinforcement Learning | AdroitHandRelocate v1 | Average Return | 30 | 10 |
| Reinforcement Learning | 18 Confounded Environments Aggregate | Normalized Mean | 1.29 | 5 |