Harnessing Mixed Offline Reinforcement Learning Datasets via Trajectory Weighting
About
Most offline reinforcement learning (RL) algorithms return a target policy maximizing a trade-off between (1) the expected performance gain over the behavior policy that collected the dataset, and (2) the risk stemming from how far the induced state-action occupancy lies out of distribution. It follows that the performance of the target policy is strongly tied to the performance of the behavior policy and, thus, to the trajectory return distribution of the dataset. We show that in mixed datasets consisting of mostly low-return trajectories and a few high-return trajectories, state-of-the-art offline RL algorithms are overly restrained by the low-return trajectories and fail to fully exploit the high-performing ones. To overcome this issue, we show that, in deterministic MDPs with stochastic initial states, the dataset sampling can be re-weighted to induce an artificial dataset whose behavior policy has a higher return. This re-weighted sampling strategy can be combined with any offline RL algorithm. We further show that the opportunity for performance improvement over the behavior policy correlates with the positive-sided variance of the trajectory returns in the dataset. We empirically show that while CQL, IQL, and TD3+BC achieve only part of this potential policy improvement, the same algorithms combined with our re-weighted sampling strategy fully exploit the dataset. Furthermore, we empirically demonstrate that, despite its theoretical limitation, the approach can still be effective in stochastic environments. The code is available at https://github.com/Improbable-AI/harness-offline-rl.
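The two quantities above can be illustrated with a short sketch. This is not the paper's exact implementation (see the linked repository for that); the softmax temperature and the weighting scheme below are illustrative assumptions. It shows (1) a positive-sided variance of trajectory returns, i.e., the variance contributed by returns above the mean, and (2) return-based sampling weights that bias minibatch sampling toward high-return trajectories.

```python
import numpy as np

def positive_sided_variance(returns):
    """Variance contributed by returns above the dataset mean.

    The paper argues this quantity correlates with the room for
    improvement over the behavior policy.
    """
    returns = np.asarray(returns, dtype=float)
    mean = returns.mean()
    above = returns[returns > mean]
    if above.size == 0:
        return 0.0
    return float(np.mean((above - mean) ** 2))

def trajectory_sampling_weights(returns, temperature=1.0):
    """Softmax-style weights over trajectories by return.

    Illustrative choice: higher-return trajectories are sampled more
    often, emulating a behavior policy with a higher return.
    """
    returns = np.asarray(returns, dtype=float)
    z = (returns - returns.max()) / max(temperature, 1e-8)
    w = np.exp(z)
    return w / w.sum()

# Usage: a mixed dataset with mostly low-return trajectories.
returns = [10.0, 12.0, 11.0, 95.0, 100.0]
w = trajectory_sampling_weights(returns, temperature=5.0)
rng = np.random.default_rng(0)
batch_idx = rng.choice(len(returns), size=4, p=w)  # re-weighted minibatch
```

Any offline RL algorithm can then draw its training transitions from the trajectories indexed by `batch_idx` instead of sampling the dataset uniformly.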
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Offline Reinforcement Learning | D4RL Kitchen kitchen-partial v0 (test) | Normalized Score | 36 | 18 |
| Offline Reinforcement Learning | D4RL Kitchen-mixed v0 (test) | Normalized Score | 50.5 | 18 |
| Locomotion | D4RL Gym (medium) | HalfCheetah Score | 49 | 12 |
| Locomotion | D4RL Gym Aggregate | Gym Total | 1.38e+3 | 12 |
| Locomotion | D4RL Gym (medium-replay) | HalfCheetah Return | 47 | 12 |
| Locomotion | D4RL Gym random-medium-expert | HalfCheetah Return | 76.8 | 12 |
| Locomotion | D4RL Gym (random-expert) | HalfCheetah Score | 80.7 | 12 |
| Offline Reinforcement Learning | D4RL AntMaze large-diverse v2 (test) | Normalized Score | 40 | 12 |
| Locomotion | D4RL Gym (random-medium) | HalfCheetah Score | 46.5 | 12 |
| Offline Reinforcement Learning | D4RL AntMaze umaze-diverse v2 (test) | Normalized Score | 54 | 12 |