
Offline Reinforcement Learning with Implicit Q-Learning

About

Offline reinforcement learning requires reconciling two conflicting aims: learning a policy that improves over the behavior policy that collected the dataset, while at the same time minimizing the deviation from the behavior policy so as to avoid errors due to distributional shift. This trade-off is critical, because most current offline reinforcement learning methods need to query the value of unseen actions during training to improve the policy, and therefore need to either constrain these actions to be in-distribution, or else regularize their values. We propose an offline RL method that never needs to evaluate actions outside of the dataset, but still enables the learned policy to improve substantially over the best behavior in the data through generalization. The main insight in our work is that, instead of evaluating unseen actions from the latest policy, we can approximate the policy improvement step implicitly by treating the state value function as a random variable, with randomness determined by the action (while still integrating over the dynamics to avoid excessive optimism), and then taking a state-conditional upper expectile of this random variable to estimate the value of the best actions in that state. This leverages the generalization capacity of the function approximator to estimate the value of the best available action at a given state without ever directly querying a Q-function with this unseen action. Our algorithm alternates between fitting this upper expectile value function and backing it up into a Q-function. Then, we extract the policy via advantage-weighted behavioral cloning. We dub our method implicit Q-learning (IQL). IQL achieves state-of-the-art performance on D4RL, a standard benchmark for offline reinforcement learning. We also demonstrate that IQL achieves strong performance when fine-tuned with online interaction after offline initialization.
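The upper-expectile idea in the abstract can be sketched with a toy example. Below is a minimal NumPy illustration of expectile regression: an asymmetric L2 loss with tau > 0.5 weights positive residuals more heavily, so minimizing it drives V(s) toward an upper expectile of the in-dataset Q-values rather than their mean. The choice tau = 0.7, the learning rate, and the toy Q-values here are illustrative assumptions, not the paper's experimental settings.

```python
import numpy as np

def expectile_loss(diff, tau=0.7):
    """Asymmetric L2 loss. With tau > 0.5, underestimates (diff > 0)
    are penalized more than overestimates, so the minimizer is an
    upper expectile of the targets rather than their mean."""
    weight = np.where(diff > 0.0, tau, 1.0 - tau)
    return weight * diff ** 2

# Tabular illustration (hypothetical data): one state with three
# in-dataset action values. Gradient descent on the expectile loss
# pushes V above the mean of Q, toward the best in-support action,
# without ever querying Q at an out-of-dataset action.
q_values = np.array([1.0, 2.0, 10.0])
tau, lr = 0.7, 0.1
v = 0.0
for _ in range(2000):
    diff = q_values - v
    grad = -2.0 * np.mean(np.where(diff > 0.0, tau, 1.0 - tau) * diff)
    v -= lr * grad
```

Note that v converges between the mean (about 4.33) and the maximum (10.0) of the Q-values: the expectile interpolates toward the best action's value as tau approaches 1, which is the mechanism IQL uses to approximate the policy improvement step implicitly.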

Ilya Kostrikov, Ashvin Nair, Sergey Levine • 2021

Related benchmarks

Task | Dataset | Result | Rank
Offline Reinforcement Learning | D4RL halfcheetah-medium-expert | Normalized Score: 94.7 | 117
Offline Reinforcement Learning | D4RL hopper-medium-expert | Normalized Score: 107.4 | 115
Auto-bidding | AuctionNet | Score: 384.6 | 90
Offline Reinforcement Learning | D4RL walker2d-medium-expert | Normalized Score: 111.7 | 86
Offline Reinforcement Learning | D4RL walker2d-random | Normalized Score: 5.8 | 77
Offline Reinforcement Learning | D4RL Medium-Replay Hopper | Normalized Score: 97.4 | 72
Offline Reinforcement Learning | D4RL halfcheetah-random | Normalized Score: 22.4 | 70
Offline Reinforcement Learning | D4RL Walker2d Medium v2 | Normalized Return: 81.8 | 67
Offline Reinforcement Learning | Kitchen Partial | Normalized Score: 59.7 | 62
Offline Reinforcement Learning | D4RL hopper-random | Normalized Score: 10.8 | 62
Showing 10 of 576 rows
...
