
Efficient Planning in a Compact Latent Action Space

About

Planning-based reinforcement learning has shown strong performance in tasks with discrete and low-dimensional continuous action spaces. However, planning usually brings significant computational overhead at decision time, and scaling such methods to high-dimensional action spaces remains challenging. To advance efficient planning for high-dimensional continuous control, we propose the Trajectory Autoencoding Planner (TAP), which learns low-dimensional latent action codes with a state-conditional VQ-VAE. The decoder of the VQ-VAE thus serves as a novel dynamics model that takes latent actions and the current state as input and reconstructs long-horizon trajectories. At inference time, given a starting state, TAP searches over discrete latent actions to find trajectories that have both high probability under the training distribution and high predicted cumulative reward. Empirical evaluation in the offline RL setting demonstrates low decision latency that is insensitive to growing raw action dimensionality. On Adroit robotic hand manipulation tasks with high-dimensional continuous action spaces, TAP surpasses existing model-based methods by a large margin and also beats strong model-free actor-critic baselines.
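To make the decision-time procedure described above concrete, below is a minimal, hypothetical PyTorch sketch of the planning loop: a state-conditional decoder maps a short sequence of discrete latent codes to a long-horizon trajectory, and candidate code sequences are scored by predicted cumulative reward plus log-probability under a learned prior. All names (TrajectoryDecoder, LatentPrior, plan) and the uniform random sampling of candidate codes are illustrative assumptions, not the authors' released implementation, which uses a more expressive sequence model and prior-guided search.

```python
# Hypothetical sketch of TAP-style planning over discrete latent action codes.
# Module and function names are illustrative placeholders, not the paper's code.
import torch
import torch.nn as nn

CODEBOOK_SIZE = 512   # number of discrete latent codes (assumed)
LATENT_LEN = 8        # latent codes per plan (assumed)
STATE_DIM = 17
ACTION_DIM = 6
HORIZON = 24          # raw transitions reconstructed from one latent plan


class TrajectoryDecoder(nn.Module):
    """State-conditional decoder: (latent codes, start state) -> trajectory.

    Stands in for the VQ-VAE decoder that reconstructs states, actions and
    rewards over a long horizon from a short sequence of discrete codes."""

    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(CODEBOOK_SIZE, 64)
        self.net = nn.Sequential(
            nn.Linear(LATENT_LEN * 64 + STATE_DIM, 256), nn.ReLU(),
            nn.Linear(256, HORIZON * (STATE_DIM + ACTION_DIM + 1)),
        )

    def forward(self, codes, state):
        z = self.embed(codes).flatten(1)                       # (B, LATENT_LEN*64)
        out = self.net(torch.cat([z, state], dim=-1))
        out = out.view(-1, HORIZON, STATE_DIM + ACTION_DIM + 1)
        states, actions, rewards = out.split([STATE_DIM, ACTION_DIM, 1], dim=-1)
        return states, actions, rewards.squeeze(-1)


class LatentPrior(nn.Module):
    """Prior over code sequences conditioned on the start state,
    collapsed here to a single factorized categorical for brevity."""

    def __init__(self):
        super().__init__()
        self.net = nn.Linear(STATE_DIM, LATENT_LEN * CODEBOOK_SIZE)

    def log_prob(self, codes, state):
        logits = self.net(state).view(-1, LATENT_LEN, CODEBOOK_SIZE)
        logp = torch.log_softmax(logits, dim=-1)
        return logp.gather(-1, codes.unsqueeze(-1)).squeeze(-1).sum(-1)


@torch.no_grad()
def plan(decoder, prior, state, num_candidates=256, beta=1.0):
    """Sample candidate latent plans, decode them, and pick the one that
    trades off predicted return against likelihood under the prior."""
    batch_state = state.expand(num_candidates, -1)
    codes = torch.randint(CODEBOOK_SIZE, (num_candidates, LATENT_LEN))
    _, actions, rewards = decoder(codes, batch_state)
    score = rewards.sum(-1) + beta * prior.log_prob(codes, batch_state)
    best = score.argmax()
    return actions[best, 0]  # execute the first action, then replan


if __name__ == "__main__":
    decoder, prior = TrajectoryDecoder(), LatentPrior()
    s0 = torch.randn(1, STATE_DIM)
    print(plan(decoder, prior, s0))
```

In this receding-horizon setup, only the first decoded action is executed before replanning. Because the search is over a short sequence of discrete codes rather than raw actions, the per-decision cost does not grow with the raw action dimensionality, which is the efficiency claim in the abstract.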

Zhengyao Jiang, Tianjun Zhang, Michael Janner, Yueying Li, Tim Rocktäschel, Edward Grefenstette, Yuandong Tian • 2022

Related benchmarks

Task | Dataset | Result | Rank
Offline Reinforcement Learning | antmaze medium-play | Score: 78 | 35
Offline Reinforcement Learning | antmaze medium-diverse | Score: 85 | 18
Offline Reinforcement Learning | antmaze large-play | Score: 74 | 18
Offline Reinforcement Learning | Hopper Medium (Noise 0) | Normalized Return: 80.92 | 14
Offline Reinforcement Learning | Hopper Medium (Noise 5) | Normalized Return: 48.69 | 14
Offline Reinforcement Learning | AntMaze-Ultra-Play | Avg Normalized Score: 22 | 10
Offline Reinforcement Learning | AntMaze Ultra-Diverse | Avg Normalized Score: 2.60e+3 | 10
Offline Reinforcement Learning | Walker2D Medium-Expert (Noise 12) | Normalized Return: 91.09 | 7
Offline Reinforcement Learning | Walker2D Medium-Expert (Noise 0) | Normalized Return: 105.3 | 7
Offline Reinforcement Learning | Walker2D Medium-Expert (Noise 7) | Normalized Return: 91.4 | 7

Showing 10 of 26 rows
