
Latent Exploration for Reinforcement Learning

About

In Reinforcement Learning, agents learn policies by exploring and interacting with the environment. Due to the curse of dimensionality, learning policies that map high-dimensional sensory input to motor output is particularly challenging. During training, state-of-the-art methods (SAC, PPO, etc.) explore the environment by perturbing the actuation with independent Gaussian noise. While this unstructured exploration has proven successful in numerous tasks, it can be suboptimal for overactuated systems. When multiple actuators, such as motors or muscles, drive behavior, uncorrelated perturbations risk diminishing each other's effect, or modifying the behavior in a task-irrelevant way. While solutions to introduce time correlation across action perturbations exist, introducing correlation across actuators has been largely ignored. Here, we propose LATent TIme-Correlated Exploration (Lattice), a method to inject temporally-correlated noise into the latent state of the policy network, which can be seamlessly integrated with on- and off-policy algorithms. We demonstrate that the noisy actions generated by perturbing the network's activations can be modeled as a multivariate Gaussian distribution with a full covariance matrix. In the PyBullet locomotion tasks, Lattice-SAC achieves state-of-the-art results, and reaches 18% higher reward than unstructured exploration in the Humanoid environment. In the musculoskeletal control environments of MyoSuite, Lattice-PPO achieves higher reward in most reaching and object manipulation tasks, while also finding more energy-efficient policies with energy reductions of 20-60%. Overall, we demonstrate the effectiveness of structured action noise in time and actuator space for complex motor control tasks. The code is available at: https://github.com/amathislab/lattice.
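The key observation above — that perturbing the latent state instead of the actions yields a full-covariance Gaussian over actions — can be verified with a small numpy sketch. This is not the authors' implementation; the dimensions, noise scale, and the single linear output layer are illustrative assumptions. If the policy's last layer is linear with weights W and we add isotropic Gaussian noise to the latent state z, the action a = W(z + eps) has covariance sigma^2 * W W^T, which correlates exploration noise across actuators:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 8-dimensional latent state, 3 actuators.
latent_dim, action_dim = 8, 3
W = rng.normal(size=(action_dim, latent_dim))  # final linear layer of the policy
z = rng.normal(size=latent_dim)                # latent state for one observation
sigma = 0.1                                    # latent noise scale (assumed)

# Unstructured exploration: independent noise per actuator -> diagonal covariance.
cov_unstructured = sigma**2 * np.eye(action_dim)

# Latent exploration: a = W @ (z + eps), eps ~ N(0, sigma^2 I)
# => Cov[a] = sigma^2 * W @ W.T, a full covariance matrix.
cov_latent = sigma**2 * W @ W.T

# Empirical check by sampling noisy actions.
samples = np.array(
    [W @ (z + sigma * rng.normal(size=latent_dim)) for _ in range(200_000)]
)
emp_cov = np.cov(samples.T)

# The off-diagonal entries are nonzero: actuator noise is correlated.
print(np.allclose(emp_cov, cov_latent, atol=5e-3))
```

In the paper, the latent perturbation is additionally correlated in time; the sketch above only shows the cross-actuator structure at a single step.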

Alberto Silvio Chiappa, Alessandro Marin Vargas, Ann Zixiang Huang, Alexander Mathis • 2023

Related benchmarks

| Task         | Dataset                           | Metric             | Result | Rank |
|--------------|-----------------------------------|--------------------|--------|------|
| Locomotion   | PyBullet Hopper                   | Energy             | 0.22   | 8    |
| Locomotion   | PyBullet Humanoid                 | Energy Consumption | 0.11   | 8    |
| Locomotion   | PyBullet Half Cheetah             | Energy Consumption | 0.25   | 8    |
| Locomotion   | PyBullet Ant                      | Energy Consumption | 0.24   | 8    |
| Locomotion   | PyBullet Walker                   | Energy Consumption | 0.28   | 8    |
| Baoding      | MyoSuite (test)                   | Energy             | 0.04   | 4    |
| Finger reach | MyoSuite Finger reach (N=5 seeds) | Energy             | 0.04   | 4    |
| Hand pose    | MyoSuite Hand pose (N=5 seeds)    | Energy             | 0.03   | 4    |
| Hand reach   | MyoSuite (test)                   | Energy             | 0.04   | 4    |
| Pen          | MyoSuite (test)                   | Energy             | 0.04   | 4    |
Showing 10 of 13 rows

Other info

Code: https://github.com/amathislab/lattice
