Reinforcement Learning with Deep Energy-Based Policies

About

We propose a method for learning expressive energy-based policies for continuous states and actions, which has previously been feasible only in tabular domains. We apply our method to learning maximum entropy policies, resulting in a new algorithm, called soft Q-learning, that expresses the optimal policy via a Boltzmann distribution. We use the recently proposed amortized Stein variational gradient descent to learn a stochastic sampling network that approximates samples from this distribution. The benefits of the proposed algorithm include improved exploration and compositionality that allows transferring skills between tasks, which we confirm in simulated experiments with swimming and walking robots. We also draw a connection to actor-critic methods, which can be viewed as performing approximate inference on the corresponding energy-based model.
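For intuition, here is a minimal NumPy sketch of the policy class soft Q-learning targets: a Boltzmann distribution pi(a|s) proportional to exp(Q_soft(s, a)/alpha), with the soft value V_soft(s) computed as a log-sum-exp over actions. The paper itself draws continuous actions from an amortized SVGD sampling network; the discretized 1-D action grid, the temperature alpha, and the bimodal toy Q-function below are illustrative assumptions, not the authors' setup.

import numpy as np

def soft_value(q_values, alpha=1.0):
    # V_soft(s) = alpha * log sum_a exp(Q_soft(s, a) / alpha), via a stable log-sum-exp.
    z = q_values / alpha
    m = z.max()
    return alpha * (m + np.log(np.exp(z - m).sum()))

def boltzmann_sample(q_values, actions, alpha=1.0, rng=None):
    # Sample from pi(a|s) = exp((Q_soft(s, a) - V_soft(s)) / alpha).
    rng = np.random.default_rng() if rng is None else rng
    log_pi = (q_values - soft_value(q_values, alpha)) / alpha
    probs = np.exp(log_pi)
    probs /= probs.sum()  # renormalize to absorb floating-point rounding error
    return rng.choice(actions, p=probs)

# Hypothetical bimodal Q-function over a discretized 1-D action space.
actions = np.linspace(-1.0, 1.0, 201)
q = np.exp(-8.0 * (actions - 0.6) ** 2) + 0.8 * np.exp(-8.0 * (actions + 0.5) ** 2)

print("V_soft(s):", soft_value(q, alpha=0.1))
print("sampled action:", boltzmann_sample(q, actions, alpha=0.1))

Because this policy keeps probability mass on every high-value mode instead of collapsing to a single maximum, it explores more broadly; this multimodality is also what the paper exploits when composing skills across tasks.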

Tuomas Haarnoja, Haoran Tang, Pieter Abbeel, Sergey Levine • 2017

Related benchmarks

Task                                  | Dataset                         | Metric                            | Result | Rank
Ant                                   | Mujoco                          | Recovery Time (%)                 | 16.4   | 16
AllegroHand                           | Isaac Gym                       | Recovery Time (%)                 | 0.213  | 8
Non-Stationary Reinforcement Learning | Toy Environments Non-Stationary | nAUC (Steady)                     | 0.9    | 8
FrankaCabinet                         | Isaac Gym                       | Recovery Time (%)                 | 18.5   | 8
2d multi-goal                         | TOY                             | Recovery Time (%)                 | 9.4    | 8
ANYmal                                | Isaac Gym                       | Recovery Time                     | 17.7   | 8
HalfCheetah                           | Mujoco                          | Recovery Time (%) (Abrupt Change) | 11.8   | 8
Hopper                                | Mujoco                          | Recovery Time (%)                 | 12.7   | 8
Humanoid                              | Mujoco                          | Recovery Time (%)                 | 16.3   | 8
Humanoid                              | Isaac Gym                       | Recovery Time (%)                 | 18.2   | 8

(Showing 10 of 16 rows.)
