
Mutual Information Regularized Offline Reinforcement Learning

About

The major challenge of offline RL is the distribution shift that appears when out-of-distribution actions are queried, which biases the policy improvement direction with extrapolation errors. Most existing methods address this problem by penalizing the policy or value for deviating from the behavior policy during policy improvement or evaluation. In this work, we propose a novel MISA framework that approaches offline RL from the perspective of Mutual Information between States and Actions in the dataset, directly constraining the policy improvement direction. MISA constructs lower bounds of mutual information parameterized by the policy and Q-values. We show that optimizing this lower bound is equivalent to maximizing the likelihood of a one-step improved policy on the offline dataset; hence, we constrain the policy improvement direction to lie on the data manifold. The resulting algorithm augments both policy evaluation and policy improvement with mutual information regularizations. MISA is a general framework that unifies conservative Q-learning (CQL) and behavior regularization methods (e.g., TD3+BC) as special cases. We introduce 3 different variants of MISA and empirically demonstrate that a tighter mutual information lower bound yields better offline RL performance. In addition, our extensive experiments show MISA significantly outperforms a wide range of baselines on various tasks of the D4RL benchmark, e.g., achieving 742.9 total points on gym-locomotion tasks. Our code is available at https://github.com/sail-sg/MISA.
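The paper's bounds are parameterized by the policy and Q-values; as a rough, self-contained illustration of the general idea of a tractable mutual information lower bound between states and actions, here is an InfoNCE-style estimator computed from a batch of critic scores. The function name, shapes, and the synthetic scores are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def infonce_mi_lower_bound(critic_scores):
    """InfoNCE lower bound on I(S; A) from a batch of critic scores.

    critic_scores[i, j] = f(s_i, a_j). Diagonal entries score the
    positive (state, action) pairs drawn jointly from the dataset;
    off-diagonal entries pair each state with actions taken in other
    transitions (negatives).
    """
    n = critic_scores.shape[0]
    # Row-wise log-softmax: log-probability the critic assigns to the
    # true action of each state among the n candidates.
    row_max = critic_scores.max(axis=1, keepdims=True)
    log_probs = critic_scores - row_max - np.log(
        np.exp(critic_scores - row_max).sum(axis=1, keepdims=True))
    # Mean diagonal log-probability plus log(n) lower-bounds the MI
    # (and is itself capped at log(n)).
    return np.log(n) + np.mean(np.diag(log_probs))

# Synthetic batch: boosting the diagonal mimics correlated (s, a) pairs.
rng = np.random.default_rng(0)
scores = rng.normal(size=(64, 64)) + 5.0 * np.eye(64)
print(infonce_mi_lower_bound(scores))  # positive, at most log(64)
```

One consequence of the log(n) cap is that estimating large mutual information this way requires large batches, which is one motivation for studying tighter bounds, as the abstract's comparison of MISA variants does.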

Xiao Ma, Bingyi Kang, Zhongwen Xu, Min Lin, Shuicheng Yan • 2022

Related benchmarks

Task                                        | Dataset                              | Metric                             | Result | Rank
Offline Reinforcement Learning              | D4RL AntMaze                         | AntMaze Umaze Return               | 92.3   | 65
Offline Reinforcement Learning              | D4RL Franka Kitchen                  | Mixed Success Rate                 | 56.6   | 34
Offline Reinforcement Learning              | D4RL Adroit (expert, human)          | Adroit Door Return (Human)         | 5.2    | 29
Offline Reinforcement Learning              | D4RL Gym v2 (test)                   | Score (HalfCheetah, Medium-Expert) | 94.7   | 20
Offline Reinforcement Learning (Locomotion) | D4RL MuJoCo                          | Return (HalfCheetah, Random)       | 2.5    | 10
Offline Reinforcement Learning              | D4RL Adroit human, cloned v0         | Adroit v0 Aggregate Score          | 162.7  | 9
Offline Reinforcement Learning              | D4RL Antmaze umaze, medium, large v0 | AntMaze UMaze v0 Score             | 92.3   | 9

Other info

Code: https://github.com/sail-sg/MISA