Deep Reinforcement Learning in a Handful of Trials using Probabilistic Dynamics Models

About

Model-based reinforcement learning (RL) algorithms can attain excellent sample efficiency, but often lag behind the best model-free algorithms in terms of asymptotic performance. This is especially true with high-capacity parametric function approximators, such as deep networks. In this paper, we study how to bridge this gap, by employing uncertainty-aware dynamics models. We propose a new algorithm called probabilistic ensembles with trajectory sampling (PETS) that combines uncertainty-aware deep network dynamics models with sampling-based uncertainty propagation. Our comparison to state-of-the-art model-based and model-free deep RL algorithms shows that our approach matches the asymptotic performance of model-free algorithms on several challenging benchmark tasks, while requiring significantly fewer samples (e.g., 8 and 125 times fewer samples than Soft Actor Critic and Proximal Policy Optimization respectively on the half-cheetah task).
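The two ingredients the abstract names, an ensemble of probabilistic dynamics models and sampling-based uncertainty propagation, can be illustrated with a small toy sketch. The code below is not the paper's implementation: the "ensemble" is a set of hand-made perturbed linear models standing in for the bootstrapped neural networks PETS actually trains, the planner is plain random shooting rather than the CEM optimizer the paper uses, and all names (`make_ensemble`, `rollout_cost`, `plan`) are hypothetical. It shows only the shape of the idea: particles are assigned to ensemble members and propagated through sampled Gaussian predictions, and the controller picks the action sequence with the lowest expected cost.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_ensemble(num_models=5):
    """Toy stand-in for a probabilistic ensemble: each member maps
    (state, action) to a Gaussian (mean, std) over the next state."""
    models = []
    for _ in range(num_models):
        a = 1.0 + 0.05 * rng.normal()   # per-member dynamics perturbation
        b = 0.1 + 0.01 * rng.normal()
        models.append(lambda s, u, a=a, b=b: (a * s + b * u, 0.01))
    return models

def rollout_cost(models, s0, actions, num_particles=20, rng=rng):
    """Trajectory-sampling propagation: each particle keeps one randomly
    assigned ensemble member for the whole rollout and samples from its
    Gaussian prediction at every step. Cost here is squared distance of
    the state from the origin, averaged over particles."""
    assign = rng.integers(len(models), size=num_particles)
    states = np.full(num_particles, s0, dtype=float)
    total = 0.0
    for u in actions:
        for p in range(num_particles):
            mean, std = models[assign[p]](states[p], u)
            states[p] = mean + std * rng.normal()
        total += np.mean(states ** 2)
    return total

def plan(models, s0, horizon=5, num_candidates=200, rng=rng):
    """Random-shooting MPC: sample candidate action sequences, score each
    with the particle rollout above, and return the first action of the
    cheapest sequence (PETS itself uses CEM for this search)."""
    best_u, best_cost = 0.0, np.inf
    for _ in range(num_candidates):
        actions = rng.uniform(-1.0, 1.0, size=horizon)
        cost = rollout_cost(models, s0, actions, rng=rng)
        if cost < best_cost:
            best_cost, best_u = cost, actions[0]
    return best_u

models = make_ensemble()
u0 = plan(models, s0=1.0)   # first action of the best-scoring sequence
print(u0)
```

Keeping one ensemble member per particle for the whole rollout corresponds to the "TS-inf" propagation variant discussed in the paper; it separates the ensemble's epistemic disagreement from the per-step aleatoric sampling.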

Kurtland Chua, Roberto Calandra, Rowan McAllister, Sergey Levine • 2018

Related benchmarks

Task                 Dataset                           Result                  Rank
Control              Beta Tracking                     Median Samples: 330     24
Continuous Control   Pendulum                          Median Samples: 5.6     12
Continuous Control   cartpole                          Median Samples: 1.63    10
Object Stacking      Stack In-distribution I (test)    Success Rate: 97.2      10
Crash Avoidance      Crash Composition C (test)        Success Rate: 37.1      10
Box/Door Unlocking   Unlock In-distribution I (test)   Success Rate: 5.95e+3   10
Box/Door Unlocking   Unlock Spuriousness S (test)      Success Rate: 20.6      10
Crash Avoidance      Crash In-distribution I (test)    Success Rate: 52.3      10
Crash Avoidance      Crash Spuriousness S (test)       Success Rate: 44.6      10
Object Stacking      Stack Composition C (test)        Success Rate: 73.7      10

(Showing 10 of 25 rows.)
