# Model-based Offline Reinforcement Learning with Lower Expectile Q-Learning

## About
Model-based offline reinforcement learning (RL) is a compelling approach that addresses the challenge of learning from limited, static data by generating imaginary trajectories using learned models. However, these approaches often struggle with inaccurate value estimation from model rollouts. In this paper, we introduce a novel model-based offline RL method, Lower Expectile Q-learning (LEQ), which provides a low-bias model-based value estimation via lower expectile regression of $\lambda$-returns. Our empirical results show that LEQ significantly outperforms previous model-based offline RL methods on long-horizon tasks, such as the D4RL AntMaze tasks, matching or surpassing model-free and sequence-modeling approaches. Furthermore, LEQ matches the performance of state-of-the-art model-based and model-free methods in dense-reward environments across both state-based tasks (NeoRL and D4RL) and pixel-based tasks (V-D4RL), showing that LEQ works robustly across diverse domains. Our ablation studies demonstrate that lower expectile regression, $\lambda$-returns, and critic training on offline data are all crucial for LEQ.
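The two core ingredients named above can be sketched in a few lines. The snippet below is a minimal NumPy illustration, not the paper's implementation: function names, array conventions, and the default values of `gamma`, `lam`, and the expectile `tau` are all illustrative. It shows (1) the backward recursion for $\lambda$-returns over a model rollout and (2) the asymmetric expectile loss, where `tau < 0.5` down-weights targets above the current Q estimate, yielding a lower-expectile (pessimistic) value fit.

```python
import numpy as np

def lambda_returns(rewards, values, gamma=0.99, lam=0.95):
    """TD(lambda) targets computed backward over a rollout.

    rewards[t] = r_t for t = 0..T-1
    values[t]  = V(s_{t+1}), the bootstrap value after step t
    """
    T = len(rewards)
    returns = np.empty(T)
    next_return = values[-1]  # bootstrap from the final state value
    for t in reversed(range(T)):
        # Mix the one-step bootstrap with the running lambda-return.
        next_return = rewards[t] + gamma * (
            (1.0 - lam) * values[t] + lam * next_return
        )
        returns[t] = next_return
    return returns

def lower_expectile_loss(q_pred, target, tau=0.1):
    """Expectile regression loss |tau - 1(u < 0)| * u^2 with u = target - q_pred.

    With tau < 0.5, residuals where the target exceeds the prediction get
    the small weight tau, so the fit tracks a lower expectile of the targets.
    """
    u = target - q_pred
    weight = np.where(u > 0, tau, 1.0 - tau)
    return np.mean(weight * u ** 2)
```

With `lam=1.0` the recursion reduces to the Monte Carlo return bootstrapped at the horizon, and with `lam=0.0` it reduces to one-step TD targets; intermediate values trade off the bias of the learned model against the variance of long rollouts.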
## Related benchmarks
| Task | Dataset | Result | Rank |
|---|---|---|---|
| Offline Reinforcement Learning | D4RL halfcheetah-medium-expert | Normalized Score: 102.8 | 117 |
| Offline Reinforcement Learning | D4RL hopper-medium-expert | Normalized Score: 109.4 | 115 |
| Offline Reinforcement Learning | D4RL walker2d-random | Normalized Score: 21.5 | 77 |
| Offline Reinforcement Learning | D4RL Medium-Replay Hopper | Normalized Score: 103.9 | 72 |
| Offline Reinforcement Learning | D4RL halfcheetah-random | Normalized Score: 30.8 | 70 |
| Offline Reinforcement Learning | D4RL Medium HalfCheetah | Normalized Score: 71.7 | 59 |
| Offline Reinforcement Learning | D4RL Medium-Replay HalfCheetah | Normalized Score: 65.5 | 59 |
| Offline Reinforcement Learning | D4RL Medium Walker2d | Normalized Score: 74.9 | 58 |
| Offline Reinforcement Learning | D4RL walker2d medium-replay | Normalized Score: 98.7 | 45 |
| Offline Reinforcement Learning | puzzle-4x4-play OGBench 5 tasks v0 | Average Success Rate: 0.00 | 18 |