
Sample Efficient Reinforcement Learning with REINFORCE

About

Policy gradient methods are among the most effective methods for large-scale reinforcement learning, and their empirical success has prompted several works that develop the foundations of their global convergence theory. However, prior works have required either exact gradients or state-action visitation-measure-based mini-batch stochastic gradients with a diverging batch size, which limits their applicability in practical scenarios. In this paper, we consider classical policy gradient methods that compute an approximate gradient from a single trajectory or a fixed-size mini-batch of trajectories under softmax parametrization and log-barrier regularization, along with the widely used REINFORCE gradient estimation procedure. By controlling the number of "bad" episodes and resorting to the classical doubling trick, we establish an anytime sub-linear high-probability regret bound as well as almost sure global convergence of the average regret with an asymptotically sub-linear rate. These are the first global convergence and sample-efficiency results for the well-known REINFORCE algorithm, and they contribute to a better understanding of its performance in practice.

Junzi Zhang, Jongho Kim, Brendan O'Donoghue, Stephen Boyd • 2020
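
To make the setup in the abstract concrete, below is a minimal sketch of single-sample REINFORCE with softmax parametrization and log-barrier regularization, run on a K-armed bandit (the dataset in the benchmark table below). This is an illustrative toy version, not the authors' implementation: it uses the standard score-function estimator on one action per step and omits the paper's trajectory-level analysis and doubling trick. The function name `reinforce_bandit` and the hyperparameters (`lr`, `lam`, the reward noise) are assumptions chosen for the example.

```python
import numpy as np

def softmax(theta):
    z = theta - theta.max()          # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()

def reinforce_bandit(true_means, steps=5000, lr=0.1, lam=0.01, seed=0):
    """Single-sample REINFORCE with softmax logits and a log-barrier
    regularizer (lam / K) * sum_a log pi(a) on a K-armed bandit.
    Hyperparameters here are illustrative, not from the paper."""
    rng = np.random.default_rng(seed)
    K = len(true_means)
    theta = np.zeros(K)              # softmax logits, one per arm
    best = max(true_means)
    regret = 0.0
    for _ in range(steps):
        pi = softmax(theta)
        a = rng.choice(K, p=pi)                    # one sampled action per step
        r = true_means[a] + rng.normal(0.0, 0.1)   # noisy reward
        # score-function (REINFORCE) gradient: grad_theta log pi(a) = e_a - pi
        grad_log = -pi.copy()
        grad_log[a] += 1.0
        # gradient of the log-barrier term is lam * (1/K - pi); it pushes the
        # policy away from the simplex boundary (toward uniform)
        barrier_grad = lam * (1.0 / K - pi)
        theta += lr * (r * grad_log + barrier_grad)
        regret += best - true_means[a]             # per-step expected regret
    return softmax(theta), regret

pi, regret = reinforce_bandit(np.array([0.2, 0.5, 0.8]))
print("final policy:", np.round(pi, 3), " cumulative regret:", round(regret, 1))
```

With these (assumed) settings the policy concentrates on the best arm while the barrier term keeps every arm's probability bounded away from zero, which is the role log-barrier regularization plays in the paper's analysis of "bad" episodes.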

Related benchmarks

Task                 Dataset              Result                 Rank
Policy Optimization  Multi-Armed Bandits  Sample Complexity: -6  8
