A minimax and asymptotically optimal algorithm for stochastic bandits

About

We propose the kl-UCB++ algorithm for regret minimization in stochastic bandit models with exponential families of distributions. We prove that it is simultaneously asymptotically optimal (in the sense of Lai and Robbins' lower bound) and minimax optimal. This is the first algorithm proved to enjoy both properties at the same time. This work thus merges two different lines of research, with simple and clear proofs.
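To make the idea concrete, here is a minimal sketch of a kl-UCB-style index in the Bernoulli case: each arm's upper confidence bound is the largest mean q whose KL divergence from the empirical mean stays below an exploration rate divided by the pull count. The exploration function below uses the form log_+(T/(K n) (1 + log_+^2(T/(K n)))) associated with kl-UCB++; the function names and the bisection tolerance are illustrative choices, not the paper's code.

```python
import math

def kl_bernoulli(p, q):
    """KL divergence between Bernoulli(p) and Bernoulli(q), clipped for stability."""
    eps = 1e-12
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def log_plus(x):
    """log_+(x) = max(log x, 0); caller guarantees x > 0."""
    return max(math.log(x), 0.0)

def exploration(T, K, n):
    """Exploration rate in the kl-UCB++ style: log_+(T/(K n) (1 + log_+^2(T/(K n))))."""
    x = T / (K * n)
    lp = log_plus(x)
    return log_plus(x * (1 + lp * lp))

def kl_ucb_index(mean, n, T, K, tol=1e-6):
    """Largest q in [mean, 1] with n * kl(mean, q) <= exploration(T, K, n), by bisection."""
    if n == 0:
        return 1.0  # unpulled arms get the maximal index
    level = exploration(T, K, n) / n
    lo, hi = mean, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if kl_bernoulli(mean, mid) <= level:
            lo = mid
        else:
            hi = mid
    return lo
```

At each round the algorithm pulls the arm maximizing this index; as the pull count n grows relative to T/K, the exploration rate vanishes and the index shrinks toward the empirical mean.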

Pierre Ménard, Aurélien Garivier (IMT) • 2017

Related benchmarks

Task: Regret Minimization
Dataset: K-armed bandits, Exponential Family rewards
Result: Finite-Time Regret (Minimax Ratio)
Rank: 1
