Exclusively Penalized Q-learning for Offline Reinforcement Learning
About
Constraint-based offline reinforcement learning (RL) imposes policy constraints or penalties on the value function to mitigate overestimation errors caused by distributional shift. This paper focuses on a limitation of existing offline RL methods with a penalized value function: the penalty can introduce unnecessary bias into the value function, creating the potential for underestimation. To address this concern, we propose Exclusively Penalized Q-learning (EPQ), which reduces estimation bias in the value function by selectively penalizing only those states that are prone to inducing estimation errors. Numerical results show that our method significantly reduces underestimation bias and improves performance on various offline control tasks compared to other offline RL methods.
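To make the idea of selective penalization concrete, below is a minimal PyTorch sketch of a critic update that applies a penalty only where an indicator flags an error-prone state. This is an illustration of the general technique, not the authors' implementation: the names `q_net`, `target_q_net`, `policy`, and `behavior_log_prob`, and the parameters `alpha` and `tau`, are all assumptions introduced here for the example.

```python
# Conceptual sketch of a selectively penalized Q-update (NOT the EPQ paper's code).
# Assumes q_net(s, a), target_q_net(s, a), policy(s), and behavior_log_prob(s, a)
# are user-provided callables returning tensors; alpha and tau are illustrative.
import torch
import torch.nn.functional as F

def selectively_penalized_critic_loss(q_net, target_q_net, policy,
                                      behavior_log_prob, batch,
                                      gamma=0.99, alpha=1.0, tau=-5.0):
    s, a, r, s_next, done = batch  # tensors sampled from the offline dataset

    # Standard TD target using the target network and the current policy.
    with torch.no_grad():
        a_next = policy(s_next)
        target = r + gamma * (1.0 - done) * target_q_net(s_next, a_next)
    td_loss = F.mse_loss(q_net(s, a), target)

    # Penalize Q only at states whose policy actions are poorly covered by
    # the dataset: low behavior-policy log-density marks an error-prone state.
    a_pi = policy(s)
    with torch.no_grad():
        penalty_mask = (behavior_log_prob(s, a_pi) < tau).float()
    penalty = (penalty_mask * q_net(s, a_pi)).mean()

    return td_loss + alpha * penalty
```

The key departure from uniformly penalized methods such as CQL is the mask: the penalty term is applied only where the coverage indicator fires, rather than across the whole batch, which is what limits the unnecessary bias discussed above.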
Related benchmarks
| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Offline Reinforcement Learning | D4RL antmaze-umaze (diverse) | Normalized Score | 78.3 | 40 |
| Offline Reinforcement Learning | D4RL AntMaze | AntMaze Umaze Return | 99.4 | 39 |
| Offline Reinforcement Learning | D4RL MuJoCo Hopper medium standard | Normalized Score | 101.3 | 36 |
| Offline Reinforcement Learning | D4RL Adroit pen (cloned) | Normalized Return | 91.8 | 32 |
| Offline Reinforcement Learning | D4RL Adroit pen (human) | Normalized Return | 83.9 | 32 |
| Offline Reinforcement Learning | D4RL Adroit (expert, human) | Adroit Door Return (Human) | 13.2 | 29 |
| Offline Reinforcement Learning | D4RL antmaze-med (diverse) | Normalized Score | 86.7 | 26 |
| Offline Reinforcement Learning | D4RL antmaze-large (play) | Normalized Score | 40 | 26 |
| Offline Reinforcement Learning | D4RL antmaze-large (diverse) | Normalized Score | 36.7 | 26 |
| Offline Reinforcement Learning | MuJoCo hopper D4RL (medium-replay) | Normalized Return | 97.8 | 26 |