
Generating Adjacency-Constrained Subgoals in Hierarchical Reinforcement Learning

About

Goal-conditioned hierarchical reinforcement learning (HRL) is a promising approach for scaling up reinforcement learning (RL) techniques. However, it often suffers from training inefficiency because the action space of the high-level policy, i.e., the goal space, is often large. Searching in a large goal space poses difficulties for both high-level subgoal generation and low-level policy learning. In this paper, we show that this problem can be effectively alleviated by restricting the high-level action space from the whole goal space to a $k$-step adjacent region of the current state using an adjacency constraint. We theoretically prove that the proposed adjacency constraint preserves the optimal hierarchical policy in deterministic MDPs, and show that this constraint can be practically implemented by training an adjacency network that discriminates between adjacent and non-adjacent subgoals. Experimental results on discrete and continuous control tasks show that incorporating the adjacency constraint improves the performance of state-of-the-art HRL approaches in both deterministic and stochastic environments.
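The adjacency network described above can be trained with a margin-based hinge loss that pulls embeddings of $k$-step-adjacent state pairs within a fixed distance threshold and pushes non-adjacent pairs beyond it. The sketch below is a minimal illustration of that idea, not the paper's exact implementation; the function name, the threshold `eps_k`, and the margin `delta` are illustrative choices.

```python
import numpy as np

def adjacency_loss(phi_s, phi_g, adjacent, eps_k=1.0, delta=0.2):
    """Hinge loss for training an adjacency network (illustrative sketch).

    phi_s, phi_g : (n, d) arrays of embeddings for states and candidate subgoals.
    adjacent     : (n,) boolean array, True if the pair is within k environment steps.
    eps_k        : embedding-distance threshold representing k-step adjacency.
    delta        : margin pushing non-adjacent pairs strictly beyond the threshold.
    """
    # Euclidean distance in the learned embedding space.
    dist = np.linalg.norm(phi_s - phi_g, axis=-1)
    # Adjacent pairs are penalized only if they land outside the eps_k ball.
    adj_term = np.maximum(dist - eps_k, 0.0)
    # Non-adjacent pairs are penalized until they exceed eps_k + delta.
    non_adj_term = np.maximum(eps_k + delta - dist, 0.0)
    return np.where(adjacent, adj_term, non_adj_term).mean()
```

A subgoal at embedding distance below `eps_k` from the current state then counts as adjacent, so the high-level policy can be restricted (or regularized) to propose only such subgoals.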

Tianren Zhang, Shangqi Guo, Tian Tan, Xiaolin Hu, Feng Chen • 2020

Related benchmarks

Task                Dataset        Metric        Result    Rank
Navigation          PointMaze      Success Rate  9.37e+3   21
Navigation          AntMaze Small  Success Rate  8.84e+3   16
Navigation          AntMaze        Success Rate  6.87e+3   16
Navigation          Bottleneck     Success Rate  0.00e+0   16
Navigation          Complex        Success Rate  0.00e+0   16
Reaching            Reacher 3D     Success Rate  67.1      10
Obstacle Avoidance  UR3Obstacle    Success Rate  0.5       8
