
Learning Domain Invariant Representations in Goal-conditioned Block MDPs

About

Deep Reinforcement Learning (RL) succeeds at solving many complex Markov Decision Process (MDP) problems. However, agents often face unanticipated environmental changes after deployment in the real world. These changes are often spurious and unrelated to the underlying problem, such as background shifts for agents with visual input. Unfortunately, deep RL policies are usually sensitive to these changes and fail to act robustly against them. This resembles the problem of domain generalization in supervised learning. In this work, we study this problem for goal-conditioned RL agents. We propose a theoretical framework in the Block MDP setting that characterizes the generalizability of goal-conditioned policies to new environments. Under this framework, we develop a practical method, PA-SkewFit, that enhances domain generalization. Empirical evaluation shows that our goal-conditioned RL agent performs well in various unseen test environments, improving over baselines by 50%.
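The core idea above can be illustrated with a toy sketch (this is not the paper's PA-SkewFit implementation): in a Block MDP, the observation combines task-relevant state features with spurious "background" features that shift across domains. A domain-invariant encoder maps observations that differ only in background to the same representation, so a goal-conditioned policy built on it behaves identically in unseen test domains. The dimensions and layout below are assumptions for illustration only.

```python
# Toy illustration of domain invariance in a goal-conditioned Block MDP.
# Assumed layout: the first STATE_DIM entries of an observation are the
# task-relevant state; the rest are spurious background features.
STATE_DIM = 4

def invariant_encoder(obs):
    """Keep only the task-relevant block, discarding background features."""
    return tuple(obs[:STATE_DIM])

def goal_conditioned_policy(obs, goal):
    """Toy deterministic policy: step from the encoded state toward the goal."""
    z = invariant_encoder(obs)
    return tuple(g - s for g, s in zip(goal, z))

# Same underlying state, two different backgrounds (train vs. unseen test domain).
state = [0.5, 0.2, -0.1, 0.0]
obs_train = state + [9.0, 9.0, 9.0]    # training-domain background
obs_test = state + [-3.0, 7.0, 1.0]    # unseen test-domain background
goal = (1.0, 0.0, 0.0, 0.0)

# Because the encoder ignores the background, the policy's action is
# unchanged under the domain shift.
assert goal_conditioned_policy(obs_train, goal) == goal_conditioned_policy(obs_test, goal)
```

In practice the encoder is learned rather than hand-specified, and the paper's framework characterizes when such learned goal-conditioned representations generalize to new environments.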

Beining Han, Chongyi Zheng, Harris Chan, Keiran Paster, Michael R. Zhang, Jimmy Ba · 2021

Related benchmarks

Task   | Dataset            | Metric           | Result | Rank
Door   | Multiworld (test)  | Angle Difference | 0.106  | 5
Pickup | Multiworld (test)  | Object Distance  | 0.028  | 5
Push   | Multiworld (test)  | Puck Distance    | 0.069  | 5
Reach  | Multiworld (test)  | Hand Distance    | 0.076  | 5
Pickup | Multiworld (train) | Object Distance  | 0.02   | 2
Door   | Multiworld (train) | Angle Difference | 0.058  | 2
Push   | Multiworld (train) | Puck Distance    | 0.06   | 2
Reach  | Multiworld (train) | Hand Distance    | 0.067  | 2
