
On the model-based stochastic value gradient for continuous reinforcement learning

About

For over a decade, model-based reinforcement learning has been seen as a way to leverage control-based domain knowledge to improve the sample efficiency of reinforcement learning agents. While model-based agents are conceptually appealing, their policies tend to lag behind those of model-free agents in terms of final reward, especially in non-trivial environments. In response, researchers have proposed model-based agents with increasingly complex components, from ensembles of probabilistic dynamics models to heuristics for mitigating model error. In a reversal of this trend, we show that simple model-based agents can be derived from existing ideas that not only match but outperform state-of-the-art model-free agents in terms of both sample efficiency and final reward. We find that a model-free soft value estimate for policy evaluation and a model-based stochastic value gradient for policy improvement form an effective combination, achieving state-of-the-art results on a high-dimensional humanoid control task which most model-based agents are unable to solve. Our findings suggest that model-based policy evaluation deserves closer attention.

Brandon Amos, Samuel Stanton, Denis Yarats, Andrew Gordon Wilson • 2020
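To make the combination described in the abstract concrete, the sketch below illustrates a short-horizon stochastic value gradient in PyTorch: the policy is rolled out through a learned dynamics model for a few imagined steps, entropy-regularized rewards are accumulated, a model-free soft Q estimate bootstraps the return at the horizon, and the whole return is differentiated end-to-end through reparameterized action samples. This is a minimal sketch under stated assumptions, not the authors' implementation: the modules (dynamics, reward_fn, soft_q, policy_mu, policy_log_std), the dimensions, and the hyperparameters are all illustrative stand-ins.

```python
# Minimal, hypothetical sketch of a short-horizon stochastic value gradient
# with a model-free soft value bootstrap. All module names, shapes, and
# hyperparameters below are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn

obs_dim, act_dim, horizon, gamma, alpha = 8, 2, 3, 0.99, 0.1

# Stand-ins for the learned components (kept linear for brevity).
dynamics = nn.Linear(obs_dim + act_dim, obs_dim)   # learned transition model
reward_fn = nn.Linear(obs_dim + act_dim, 1)        # learned reward model
soft_q = nn.Linear(obs_dim + act_dim, 1)           # model-free soft Q estimate
policy_mu = nn.Linear(obs_dim, act_dim)
policy_log_std = nn.Parameter(torch.zeros(act_dim))

def policy_sample(obs):
    """Reparameterized (pathwise) action sample so gradients flow to the policy."""
    mu = policy_mu(obs)
    std = policy_log_std.exp()
    action = mu + std * torch.randn_like(mu)       # rsample
    log_prob = (-0.5 * ((action - mu) / std) ** 2 - policy_log_std
                - 0.5 * torch.log(torch.tensor(2 * torch.pi))).sum(-1)
    return action, log_prob

def svg_objective(obs):
    """Imagined rollout through the model; differentiable end to end."""
    ret, discount = 0.0, 1.0
    for _ in range(horizon):
        act, logp = policy_sample(obs)
        sa = torch.cat([obs, act], dim=-1)
        # Entropy-regularized ("soft") reward along the imagined trajectory.
        ret = ret + discount * (reward_fn(sa).squeeze(-1) - alpha * logp)
        obs = dynamics(sa)                         # imagined next state
        discount *= gamma
    # Bootstrap the tail of the return with the model-free soft value.
    act, logp = policy_sample(obs)
    sa = torch.cat([obs, act], dim=-1)
    ret = ret + discount * (soft_q(sa).squeeze(-1) - alpha * logp)
    return ret.mean()

obs = torch.randn(32, obs_dim)
loss = -svg_objective(obs)                         # ascend the value gradient
loss.backward()
```

Because actions are drawn with the reparameterization trick, the rollout return is differentiable in the policy parameters, making this a pathwise (stochastic value) gradient rather than a score-function estimator; the soft Q bootstrap is what lets the rollout horizon stay short.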

Related benchmarks

Task                   Dataset   Result                           Rank
Aggregate Efficiency   GYM       Runtime Ratio (vs DHMBPO): 1.6   3
Ant                    GYM       Runtime (hours): 5.2             3
HalfCheetah            GYM       Runtime (hours): 6.7             3
Hopper                 GYM       Runtime (hours): 6.7             3
Humanoid               GYM       Runtime (hours): 6.9             3
Walker2d               GYM       Runtime (hours): 6.6             3
