Revisiting Simple Regret: Fast Rates for Returning a Good Arm

About

Simple regret is a natural and parameter-free performance criterion for pure exploration in multi-armed bandits, yet it is less popular than the probability of missing the best arm or an $\epsilon$-good arm, perhaps due to the lack of easy ways to characterize it. In this paper, we make significant progress on minimizing simple regret in both the data-rich ($T \ge n$) and data-poor ($T \le n$) regimes, where $n$ is the number of arms and $T$ is the number of samples. At its heart is our improved instance-dependent analysis of the well-known Sequential Halving (SH) algorithm, where we bound the probability of returning an arm whose mean reward is not within $\epsilon$ of the best (i.e., not $\epsilon$-good) for any choice of $\epsilon > 0$, even though $\epsilon$ is not an input to SH. Our bound not only leads to an optimal worst-case simple regret bound of $\sqrt{n/T}$ up to logarithmic factors but also essentially matches the instance-dependent lower bound for returning an $\epsilon$-good arm reported by Katz-Samuels and Jamieson (2020). For the more challenging data-poor regime, we propose Bracketing SH (BSH), which enjoys the same improvement even without sampling each arm at least once. Our empirical study shows that BSH outperforms existing methods on real-world tasks.
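The analysis above centers on the standard Sequential Halving algorithm. For readers unfamiliar with it, below is a minimal sketch of SH in Python: arms are pulled in rounds, the budget is split evenly across rounds and surviving arms, and the worse half of the arms is dropped after each round. The `pull(i)` callback and the exact budget-splitting details are illustrative assumptions, not the paper's implementation.

```python
import math
import random


def sequential_halving(pull, n, T):
    """Sketch of Sequential Halving for pure exploration.

    pull(i) is assumed to return one stochastic reward for arm i;
    n is the number of arms and T is the total sampling budget.
    Returns the index of the single arm that survives all rounds.
    """
    survivors = list(range(n))
    rounds = math.ceil(math.log2(n))
    for _ in range(rounds):
        if len(survivors) == 1:
            break
        # Split the budget evenly over rounds and over surviving arms.
        pulls_per_arm = max(1, T // (rounds * len(survivors)))
        means = []
        for arm in survivors:
            rewards = [pull(arm) for _ in range(pulls_per_arm)]
            means.append(sum(rewards) / pulls_per_arm)
        # Keep the better half (rounded up) by empirical mean.
        order = sorted(range(len(survivors)), key=lambda k: means[k], reverse=True)
        keep = math.ceil(len(survivors) / 2)
        survivors = [survivors[k] for k in order[:keep]]
    return survivors[0]


# Hypothetical usage: 8 Gaussian arms with unit variance, arm 0 is best.
true_means = [0.9, 0.5, 0.5, 0.4, 0.4, 0.3, 0.2, 0.1]
best = sequential_halving(lambda i: random.gauss(true_means[i], 1.0), n=8, T=800)
```

Note that $\epsilon$ appears nowhere in the procedure; the paper's contribution is a bound on the probability that the returned arm is not $\epsilon$-good that holds simultaneously for every $\epsilon > 0$.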

Yao Zhao, Connor James Stephens, Csaba Szepesvári, Kwang-Sung Jun • 2022

Related benchmarks

Task | Dataset | Result | Rank
Best Arm Identification | 10 Synthetic Gaussian Instances (K=40 arms) | H1: 23.4 | 10
Best Arm Identification | Synthetic Multi-Armed Bandit (N=1,024, T=10,240; T = N log2 N, test) | Avg Simple Regret: 0.1531 | 6
Best Arm Identification | Synthetic Bandit (N=128, T=896) | Average Simple Regret: 0.0992 | 6
