Efficient Planning in a Compact Latent Action Space

About

Planning-based reinforcement learning has shown strong performance on tasks with discrete and low-dimensional continuous action spaces. However, planning usually incurs significant computational overhead at decision time, and scaling such methods to high-dimensional action spaces remains challenging. To advance efficient planning for high-dimensional continuous control, we propose the Trajectory Autoencoding Planner (TAP), which learns low-dimensional latent action codes with a state-conditional VQ-VAE. The decoder of the VQ-VAE thus serves as a novel dynamics model that takes latent actions and the current state as input and reconstructs long-horizon trajectories. At inference time, given a starting state, TAP searches over discrete latent actions to find trajectories that have both high probability under the training distribution and high predicted cumulative reward. Empirical evaluation in the offline RL setting demonstrates low decision latency that is insensitive to the raw action dimensionality. On Adroit robotic hand manipulation tasks with high-dimensional continuous action spaces, TAP surpasses existing model-based methods by a large margin and also beats strong model-free actor-critic baselines.
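The abstract's inference procedure can be illustrated with a minimal sketch: score each discrete latent-action sequence by its log-probability under the data distribution plus its predicted return, and pick the best one. The tables standing in for the learned VQ-VAE decoder, the codebook size, the horizon, and the `kappa` trade-off weight below are all illustrative assumptions, not TAP's actual components.

```python
import itertools
import numpy as np

# Hypothetical stand-ins for TAP's learned components (assumptions):
# a real implementation would query a state-conditional VQ-VAE decoder.
CODEBOOK_SIZE = 4   # number of discrete latent codes (assumed)
HORIZON = 3         # latent steps per plan (assumed)

rng = np.random.default_rng(0)
# Toy "decoder" outputs: log-probability and predicted cumulative reward
# for every latent code sequence, indexed by the sequence itself.
log_prob_table = rng.normal(size=(CODEBOOK_SIZE,) * HORIZON)
return_table = rng.normal(size=(CODEBOOK_SIZE,) * HORIZON)

def plan(kappa=1.0):
    """Exhaustively search discrete latent sequences, scoring each by
    data-likelihood plus kappa-weighted predicted return (the trade-off
    described in the abstract)."""
    best_seq, best_score = None, -np.inf
    for seq in itertools.product(range(CODEBOOK_SIZE), repeat=HORIZON):
        score = log_prob_table[seq] + kappa * return_table[seq]
        if score > best_score:
            best_seq, best_score = seq, score
    return best_seq, best_score
```

Because the search space is discrete and small (codebook size to the power of the horizon), exhaustive or sampling-based search stays cheap regardless of how many raw action dimensions each latent code decodes into, which is the source of the latency property claimed above.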

Zhengyao Jiang, Tianjun Zhang, Michael Janner, Yueying Li, Tim Rocktäschel, Edward Grefenstette, Yuandong Tian • 2022

Related benchmarks

Task                             Dataset                           Result                   Rank
Offline Reinforcement Learning   D4RL halfcheetah-medium-expert    Normalized Score 91.8    155
Offline Reinforcement Learning   D4RL hopper-medium-expert         Normalized Score 105.5   153
Offline Reinforcement Learning   D4RL walker2d-medium-expert       Normalized Score 107.4   124
Offline Reinforcement Learning   D4RL Medium-Replay Hopper         Normalized Score 87.3    97
Offline Reinforcement Learning   D4RL Medium HalfCheetah           Normalized Score 45      97
Offline Reinforcement Learning   D4RL Medium Walker2d              Normalized Score 64.9    96
Offline Reinforcement Learning   D4RL Medium-Replay HalfCheetah    Normalized Score 40.8    84
Offline Reinforcement Learning   D4RL Medium Hopper                Normalized Score 63.4    64
Offline Reinforcement Learning   D4RL Medium-Replay Walker2d       Normalized Score 66.8    42
Offline Reinforcement Learning   antmaze medium-play               Score 78                 35

Showing 10 of 35 rows
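The "Normalized Score" values above follow D4RL's convention of rescaling raw returns so that a random policy scores 0 and an expert policy scores 100 (which is why near- or above-expert results like 105.5 can exceed 100). A small sketch of that rescaling, with illustrative numbers rather than the official D4RL reference scores:

```python
def d4rl_normalized_score(raw_score, random_score, expert_score):
    """Rescale a raw return so that random = 0 and expert = 100,
    following the D4RL benchmark convention."""
    return 100.0 * (raw_score - random_score) / (expert_score - random_score)

# Illustrative reference values only (assumptions, not D4RL's constants):
print(d4rl_normalized_score(3000.0, 0.0, 4000.0))  # 75.0
```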
