When Data Geometry Meets Deep Function: Generalizing Offline Reinforcement Learning
About
In offline reinforcement learning (RL), a detrimental issue for policy learning is the accumulation of errors in the deep Q-function over out-of-distribution (OOD) regions. Unfortunately, existing offline RL methods are often over-conservative, which inevitably hurts generalization performance outside the data distribution. One interesting observation in our study is that deep Q-functions approximate well inside the convex hull of the training data. Inspired by this, we propose a new method, DOGE (Distance-sensitive Offline RL with better GEneralization). DOGE marries dataset geometry with deep function approximators in offline RL, enabling exploitation of generalizable OOD areas rather than strictly constraining the policy within the data distribution. Specifically, DOGE trains a state-conditioned distance function that can be readily plugged into standard actor-critic methods as a policy constraint. Simple yet elegant, our algorithm enjoys better generalization than state-of-the-art methods on the D4RL benchmarks. Theoretical analysis demonstrates the superiority of our approach over existing methods that rely solely on data-distribution or support constraints.
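To make the constraint concrete, here is a minimal, hypothetical sketch of the idea described above: a state-conditioned distance `g(s, a)` that measures how far a candidate action is from actions the dataset takes in similar states, used as a Lagrangian penalty on the actor objective. The kernel weighting, function names, and `threshold`/`lam` parameters are illustrative assumptions, not the paper's actual learned-distance implementation.

```python
import numpy as np

def distance_to_data(state, action, dataset):
    """Toy stand-in for DOGE's learned state-conditioned distance g(s, a):
    distance from `action` to the action taken in the most similar
    dataset state (similarity via a Gaussian-like kernel on states)."""
    states, actions = dataset
    weights = np.exp(-np.linalg.norm(states - state, axis=1))
    nearest = np.argmax(weights)  # most similar state in the dataset
    return float(np.linalg.norm(actions[nearest] - action))

def constrained_actor_objective(q_value, dist, threshold, lam):
    """Actor objective with a distance constraint (Lagrangian relaxation):
        maximize  Q(s, pi(s)) - lam * max(0, g(s, pi(s)) - epsilon),
    so actions are penalized only once they leave the generalizable
    region around the data, rather than pinned to the data itself."""
    return q_value - lam * max(0.0, dist - threshold)
```

Used this way, an action close to the dataset incurs no penalty, while an action far outside it is dominated by the distance term even if its (likely erroneous) Q-value is high.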
Related benchmarks
| Task | Dataset | Metric | Score | Rank |
|---|---|---|---|---|
| Hand Manipulation | Adroit door-human | Normalized Avg Score | -0.8 | 33 |
| Offline Reinforcement Learning | D4RL MuJoCo Walker2d-mr v2 (medium-replay) | Average Normalized Score | 87.3 | 29 |
| Offline Reinforcement Learning | D4RL MuJoCo Hopper-mr v2 (medium-replay) | Average Normalized Score | 76.2 | 29 |
| Offline Reinforcement Learning | D4RL MuJoCo Hopper-m v2 (medium) | Average Normalized Score | 98.6 | 24 |
| Offline Reinforcement Learning | D4RL MuJoCo Walker2d medium-expert v2 | Average Normalized Score | 110.4 | 24 |
| Offline Reinforcement Learning | D4RL MuJoCo Halfcheetah-mr v2 (medium-replay) | Average Normalized Score | 42.8 | 24 |
| Hand Manipulation | Adroit door-cloned | Normalized Score | 0.0 | 23 |
| Offline Reinforcement Learning | D4RL Mujoco Hopper-Medium-Expert v2 | Normalized Score | 102.7 | 22 |
| Offline Reinforcement Learning | D4RL AntMaze v2 (various) | UMaze Success Rate | 78.9 | 20 |
| Pen | Adroit Pen v0 (Cloned) | Normalized Score | 3.35e+3 | 19 |