# Neural Thompson Sampling

## About
Thompson Sampling (TS) is one of the most effective algorithms for solving contextual multi-armed bandit problems. In this paper, we propose a new algorithm, called Neural Thompson Sampling, which adapts deep neural networks for both exploration and exploitation. At the core of our algorithm is a novel posterior distribution of the reward, whose mean is the neural network approximator and whose variance is built upon the neural tangent features of the corresponding neural network. We prove that, provided the underlying reward function is bounded, the proposed algorithm is guaranteed to achieve a cumulative regret of $\mathcal{O}(T^{1/2})$, which matches the regret of other contextual bandit algorithms in terms of the total number of rounds $T$. Experimental comparisons with other benchmark bandit algorithms on various data sets corroborate our theory.
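The posterior described above can be sketched in a few lines: the mean of the sampled reward is the network output $f(x;\theta)$, and the variance is $\lambda \nu^2\, g^\top U^{-1} g$, where $g = \nabla_\theta f(x;\theta)$ is the neural tangent feature and $U$ accumulates the outer products of past features. The following is a minimal illustrative sketch, not the paper's implementation: it uses a one-hidden-layer ReLU network with hand-derived gradients, a single SGD step per round in place of full retraining, and illustrative hyperparameter names (`lam`, `nu`, `lr`).

```python
import numpy as np

rng = np.random.default_rng(0)

class NeuralTSSketch:
    """Hedged sketch of Neural Thompson Sampling with a one-hidden-layer
    network f(x; W, v) = v @ relu(W @ x). Class and parameter names are
    illustrative assumptions, not from the paper."""

    def __init__(self, dim, hidden=16, lam=1.0, nu=0.1, lr=0.01):
        self.W = rng.normal(size=(hidden, dim)) / np.sqrt(dim)
        self.v = rng.normal(size=hidden) / np.sqrt(hidden)
        self.p = self.W.size + self.v.size       # total parameter count
        self.U = lam * np.eye(self.p)            # design matrix over tangent features
        self.lam, self.nu, self.lr = lam, nu, lr

    def _forward_grad(self, x):
        h = self.W @ x
        a = np.maximum(h, 0.0)                   # ReLU activations
        f = self.v @ a
        # Gradient of f w.r.t. (W, v), flattened: the neural tangent feature g
        dW = np.outer(self.v * (h > 0), x)
        g = np.concatenate([dW.ravel(), a])
        return f, g

    def select(self, contexts):
        """Sample a reward r ~ N(f(x), lam * nu^2 * g' U^{-1} g) per arm,
        then play the arm with the largest sample."""
        scores = []
        for x in contexts:
            f, g = self._forward_grad(x)
            var = self.lam * self.nu ** 2 * (g @ np.linalg.solve(self.U, g))
            scores.append(rng.normal(f, np.sqrt(var)))
        return int(np.argmax(scores))

    def update(self, x, reward):
        f, g = self._forward_grad(x)
        self.U += np.outer(g, g)                 # rank-one update of U
        # One SGD step on squared loss, standing in for full retraining
        grad = (f - reward) * g
        self.W -= self.lr * grad[: self.W.size].reshape(self.W.shape)
        self.v -= self.lr * grad[self.W.size :]

# Toy run on a synthetic linear-reward bandit (illustrative only)
dim, n_arms = 5, 3
agent = NeuralTSSketch(dim)
theta = rng.normal(size=dim)
for t in range(50):
    ctxs = rng.normal(size=(n_arms, dim))
    arm = agent.select(ctxs)
    agent.update(ctxs[arm], float(theta @ ctxs[arm]) + 0.1 * rng.normal())
```

The sampled variance shrinks along directions of feature space that `U` has already covered, so arms with well-explored tangent features are scored close to their predicted mean while unexplored arms receive wider, more optimistic samples.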
## Related benchmarks
| Task | Dataset | Metric | Result | Rank |
|---|---|---|---|---|
| Contextual Bandit | MagicTelescope OpenML | Final Cumulative Regret | 2.01e+3 | 13 |
| Contextual Bandit | OpenML Adult T=10,000 | Cumulative Regret (Mean) | 1.76e+3 | 7 |
| Contextual Bandit | OpenML Mushroom (T=8124) | Mean Cumulative Regret | 115 | 7 |
| Contextual Bandit | OpenML Shuttle T=10,000 | Cumulative Regret (Mean) | 232 | 7 |
| Contextual Bandit | Covertype (OpenML) | Final Cumulative Regret | 3.48e+3 | 6 |
| Contextual Bandit | GasDrift OpenML | Final Cumulative Regret | 481.2 | 6 |
| Contextual Bandit | MNIST OpenML | Cumulative Regret | 1.70e+3 | 6 |
| Contextual Bandit Simulation | Friedman 2 | Final Cumulative Regret | 134.6 | 6 |
| Contextual Bandit Simulation | Friedman 3 | Final Cumulative Regret | 422.7 | 6 |
| Contextual Bandit Simulation | Friedman Sparse and Disjoint | Final Cumulative Regret | 1.44e+3 | 6 |