IDQL: Implicit Q-Learning as an Actor-Critic Method with Diffusion Policies
About
Effective offline RL methods require properly handling out-of-distribution actions. Implicit Q-learning (IQL) addresses this by training a Q-function using only dataset actions through a modified Bellman backup. However, it is unclear which policy actually attains the values represented by this implicitly trained Q-function. In this paper, we reinterpret IQL as an actor-critic method by generalizing the critic objective and connecting it to a behavior-regularized implicit actor. This generalization shows how the induced actor balances reward maximization and divergence from the behavior policy, with the specific loss choice determining the nature of this tradeoff. Notably, this actor can exhibit complex and multimodal characteristics, suggesting issues with the conditional Gaussian actor fit with advantage weighted regression (AWR) used in prior methods. Instead, we propose drawing samples from a diffusion-parameterized behavior policy and using weights computed from the critic to then importance sample our intended policy. We introduce Implicit Diffusion Q-learning (IDQL), combining our general IQL critic with this policy extraction method. IDQL maintains the ease of implementation of IQL while outperforming prior offline RL methods and demonstrating robustness to hyperparameters. Code is available at https://github.com/philippe-eecs/IDQL.
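The policy extraction step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the diffusion behavior sampler and the critic are stubbed out with simple stand-ins (`behavior_sample`, `q_fn`, `v_fn` are hypothetical names), and only the critic-weighted importance resampling of candidate actions is shown.

```python
import numpy as np

rng = np.random.default_rng(0)

def behavior_sample(state, n):
    # Stand-in for sampling n candidate actions from a diffusion
    # behavior policy mu(a|s); here just 2-D Gaussian noise.
    return rng.normal(size=(n, 2))

def q_fn(state, actions):
    # Stand-in critic Q(s, a): prefers actions near (1, 0).
    return -np.sum((actions - np.array([1.0, 0.0])) ** 2, axis=-1)

def v_fn(state):
    # Stand-in value V(s) from the implicitly trained critic.
    return -0.5

def extract_action(state, n_samples=64, temperature=3.0):
    # Sample candidates from the behavior model, weight them by the
    # critic's implicit advantage, then resample one action in
    # proportion to those weights (importance resampling).
    actions = behavior_sample(state, n_samples)
    adv = q_fn(state, actions) - v_fn(state)
    w = np.exp(temperature * adv)
    w /= w.sum()
    idx = rng.choice(n_samples, p=w)
    return actions[idx]

action = extract_action(state=None)
```

At evaluation time one can also take the argmax over the weights instead of resampling; the resampling form keeps the extracted policy stochastic and closer to the behavior distribution.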
Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Offline Reinforcement Learning | D4RL halfcheetah-medium-expert | Normalized Score | 94.4 | 117 |
| Offline Reinforcement Learning | D4RL hopper-medium-expert | Normalized Score | 105.3 | 115 |
| Offline Reinforcement Learning | D4RL walker2d-medium-expert | Normalized Score | 111.6 | 86 |
| Offline Reinforcement Learning | D4RL Medium-Replay Hopper | Normalized Score | 82.4 | 72 |
| Offline Reinforcement Learning | D4RL Medium HalfCheetah | Normalized Score | 49.7 | 59 |
| Offline Reinforcement Learning | D4RL Medium-Replay HalfCheetah | Normalized Score | 45.1 | 59 |
| Offline Reinforcement Learning | D4RL Medium Walker2d | Normalized Score | 80.2 | 58 |
| hopper locomotion | D4RL hopper medium-replay | Normalized Score | 99.4 | 56 |
| Offline Reinforcement Learning | OGBench antmaze-large-navigate-singletask task1-v0 to task5-v0 | Score | 62 | 55 |
| walker2d locomotion | D4RL walker2d medium-replay | Normalized Score | 89.1 | 53 |