Best Arm Identification with Fixed Budget: A Large Deviation Perspective
About
We consider the problem of identifying the best arm in stochastic Multi-Armed Bandits (MABs) using a fixed sampling budget. Characterizing the minimal instance-specific error probability for this problem constitutes one of the important remaining open problems in MABs. When arms are selected using a static sampling strategy, the error probability decays exponentially with the number of samples at a rate that can be explicitly derived via Large Deviation techniques. Analyzing the performance of algorithms with adaptive sampling strategies is however much more challenging. In this paper, we establish a connection between the Large Deviation Principle (LDP) satisfied by the empirical proportions of arm draws and that satisfied by the empirical arm rewards. This connection holds for any adaptive algorithm, and is leveraged (i) to improve error probability upper bounds of some existing algorithms, such as the celebrated SR (Successive Rejects) algorithm (Audibert et al., 2010), and (ii) to devise and analyze new algorithms. In particular, we present CR (Continuous Rejects), a truly adaptive algorithm that can reject arms in *any* round based on the observed empirical gaps between the rewards of various arms. Applying our Large Deviation results, we prove that CR enjoys better performance guarantees than existing algorithms, including SR. Extensive numerical experiments confirm this observation.
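For context, the SR (Successive Rejects) baseline referenced above proceeds in K−1 phases, pulling every surviving arm equally within a phase and permanently rejecting the empirically worst arm at each phase's end. Below is a minimal, hedged sketch of that phase schedule (with the standard log-bar(K) normalization from Audibert et al., 2010), assuming Bernoulli rewards for illustration; the function name and toy instance are our own, not from the paper.

```python
import math
import random

def successive_rejects(means, budget, rng=random.Random(0)):
    """Sketch of Successive Rejects: K-1 phases; after each phase the
    surviving arm with the lowest empirical mean is rejected for good.
    `means` are Bernoulli parameters of a simulated bandit (illustration only)."""
    K = len(means)
    # log-bar(K) = 1/2 + sum_{i=2}^{K} 1/i, the phase-length normalizer
    log_bar = 0.5 + sum(1.0 / i for i in range(2, K + 1))
    active = list(range(K))
    counts = [0] * K
    sums = [0.0] * K
    n_prev = 0
    for k in range(1, K):
        # cumulative per-arm pull target after phase k
        n_k = math.ceil((budget - K) / (log_bar * (K + 1 - k)))
        pulls = n_k - n_prev  # fresh pulls per surviving arm this phase
        n_prev = n_k
        for arm in active:
            for _ in range(pulls):
                reward = 1.0 if rng.random() < means[arm] else 0.0
                counts[arm] += 1
                sums[arm] += reward
        # reject the empirically worst surviving arm
        worst = min(active, key=lambda a: sums[a] / counts[a])
        active.remove(worst)
    return active[0]  # recommended best arm

# toy run on a 4-arm instance where arm 0 has the highest mean
best = successive_rejects([0.9, 0.5, 0.4, 0.3], budget=2000)
print(best)
```

CR, the algorithm proposed in the paper, replaces this fixed phase schedule with a data-driven rejection rule that may discard an arm in any round based on observed empirical gaps; its precise condition is given in the paper itself.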
Related benchmarks
| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Best Arm Identification | Convex arm-to-reward function, K=10 (test) | Error Rate | 0.62 | 15 |
| Best Arm Identification | Convex arm-to-reward function, K=20 (test) | Error Probability | 0.21 | 15 |
| Best Arm Identification | Convex arm-to-reward function, K=40 (test) | Error Rate | 0.25 | 15 |
| Best Arm Identification | Two groups of suboptimal arms (K=10) | Error Probability | 0.99 | 15 |
| Best Arm Identification | Two groups of suboptimal arms (K=20) | Error Probability | 4.27 | 15 |
| Best Arm Identification | Two groups of suboptimal arms (K=40) | Error Probability | 2.24 | 15 |
| Best Arm Identification | 55-arm convex mapping bandit | Error Probability | 0.6 | 15 |
| Best Arm Identification | Concave arm-to-reward function, K=10 (synthetic) | Error Rate | 0.04 | 15 |
| Best Arm Identification | Concave arm-to-reward function, K=20 (synthetic) | Error Probability | 0.09 | 15 |
| Best Arm Identification | Linear arm-to-reward function, K=10 | Error Probability | 0.0055 | 15 |