
Minimax Regret Bounds for Reinforcement Learning

About

We consider the problem of provably optimal exploration in reinforcement learning for finite-horizon MDPs. We show that an optimistic modification to value iteration achieves a regret bound of $\tilde{O}(\sqrt{HSAT} + H^2S^2A + H\sqrt{T})$, where $H$ is the time horizon, $S$ the number of states, $A$ the number of actions and $T$ the number of time-steps. This result improves over the previous best known bound of $\tilde{O}(HS\sqrt{AT})$, achieved by the UCRL2 algorithm of Jaksch et al. (2010). The key significance of our new result is that when $T \geq H^3S^3A$ and $SA \geq H$, it yields a regret of $\tilde{O}(\sqrt{HSAT})$, matching the established lower bound of $\Omega(\sqrt{HSAT})$ up to a logarithmic factor. Our analysis contains two key insights. We carefully apply concentration inequalities to the optimal value function as a whole, rather than to the transition probabilities (to improve scaling in $S$), and we define Bernstein-based "exploration bonuses" that use the empirical variance of the estimated values at the next states (to improve scaling in $H$).
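To see why these two conditions suffice, square both sides of each comparison: $H^2S^2A \leq \sqrt{HSAT}$ holds exactly when $T \geq H^3S^3A$, and $H\sqrt{T} \leq \sqrt{HSAT}$ holds exactly when $H \leq SA$. Under both conditions the lower-order terms are absorbed into the leading $\sqrt{HSAT}$ term, giving the minimax-optimal rate.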

Mohammad Gheshlaghi Azar, Ian Osband, Rémi Munos • 2017
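As a rough illustration of the mechanism described in the abstract, here is a minimal tabular sketch in Python of one optimistic value-iteration backup with a Bernstein-style bonus built from the empirical variance of the next-state values. This is an assumption-laden sketch, not the paper's exact UCBVI algorithm: the function name, constants, and confidence term are illustrative only.

```python
import numpy as np

def optimistic_value_iteration(counts, rewards_hat, H, delta=0.05):
    """One round of optimistic (UCB-style) value iteration on empirical
    estimates, with a Bernstein-style bonus built from the empirical
    variance of the next-state values.

    counts[s, a, s']  : visit counts of observed transitions (tabular MDP)
    rewards_hat[s, a] : empirical mean rewards, assumed in [0, 1]
    H                 : horizon; returns optimistic Q of shape (H, S, A)

    Simplified sketch of the idea in the abstract, not the paper's
    exact UCBVI algorithm or constants.
    """
    S, A, _ = counts.shape
    n = np.maximum(counts.sum(axis=2), 1)          # visits per (s, a)
    p_hat = counts / n[:, :, None]                 # empirical transitions
    L = np.log(S * A * H / delta)                  # log confidence term

    Q = np.zeros((H + 1, S, A))
    V = np.zeros((H + 1, S))                       # V[H] = 0 at the horizon
    for h in range(H - 1, -1, -1):
        ev = p_hat @ V[h + 1]                      # estimated E[V_{h+1}(s')]
        # Empirical variance of V_{h+1} under the estimated transitions.
        var = np.maximum(p_hat @ (V[h + 1] ** 2) - ev ** 2, 0.0)
        # Bernstein-style exploration bonus: variance term + range term.
        bonus = np.sqrt(2 * var * L / n) + 7 * H * L / (3 * n)
        Q[h] = np.minimum(rewards_hat + ev + bonus, H)  # clip optimism at H
        V[h] = Q[h].max(axis=1)                    # optimistic greedy value
    return Q[:H]
```

An agent would act greedily with respect to the resulting $Q$ during each episode and recompute the backup as visit counts grow; the clip at $H$ reflects that no value in an $H$-step episode with rewards in $[0,1]$ can exceed $H$.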

Related benchmarks

| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Policy Optimization | Office World MAP0 | Average Training Steps | 1.42e+5 | 18 |
| Instruction Following | BabyAI BossLevel | Success Rate | 36.4 | 14 |
| Instruction Following | BabyAI Synthseq | Average Episodic Reward | 0.361 | 7 |
| Navigation | MiniGrid Four Rooms | Average Episodic Reward | 0.672 | 7 |
| Bosslevel | BabyAI | Average Pass Rate | 0.282 | 7 |
| Instruction Following | BabyAI Goto | Average Episodic Reward | 0.538 | 7 |
| Instruction Following | BabyAI Pickup | Average Episodic Reward | 0.391 | 7 |
| Policy Optimization | Office World MAP4 | Average Training Steps | 8.02e+4 | 7 |
| Policy Optimization | Office World Map 4 Exp 6 | Average Training Steps | 8.02e+4 | 7 |
| Synthseq | BabyAI | Average Pass Rate | 26.2 | 7 |

Showing 10 of 17 rows.
