Double Thompson Sampling for Dueling Bandits
About
In this paper, we propose a Double Thompson Sampling (D-TS) algorithm for dueling bandit problems. As indicated by its name, D-TS selects both the first and the second candidates according to Thompson Sampling. Specifically, D-TS maintains a posterior distribution for the preference matrix, and chooses the pair of arms for comparison by sampling twice from the posterior distribution. This simple algorithm applies to general Copeland dueling bandits, including Condorcet dueling bandits as its special case. For general Copeland dueling bandits, we show that D-TS achieves $O(K^2 \log T)$ regret. For Condorcet dueling bandits, we further simplify the D-TS algorithm and show that the simplified D-TS algorithm achieves $O(K \log T + K^2 \log \log T)$ regret. Simulation results based on both synthetic and real-world data demonstrate the efficiency of the proposed D-TS algorithm.
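The two-sample procedure described above can be illustrated with a minimal sketch. This is an assumption-laden simplification, not the paper's exact algorithm: the published D-TS also prunes candidates with confidence bounds before sampling, which this sketch omits. It assumes Beta posteriors over pairwise win counts, with the first candidate chosen by sampled Copeland score and the second as the strongest sampled challenger.

```python
import numpy as np

def d_ts_step(wins, rng):
    """One round of a simplified Double Thompson Sampling sketch.

    wins[i, j] = number of times arm i has beaten arm j so far.
    Returns the pair (first, second) of arms to compare next.
    NOTE: a simplification of D-TS; the full algorithm also
    eliminates candidates using confidence intervals.
    """
    # First sample: draw a full preference matrix from the Beta posterior
    theta1 = rng.beta(wins + 1, wins.T + 1)
    np.fill_diagonal(theta1, 0.5)
    # First candidate: arm with the highest sampled Copeland score
    copeland = (theta1 > 0.5).sum(axis=1)
    first = int(np.argmax(copeland))
    # Second sample: draw again, independently of the first
    theta2 = rng.beta(wins + 1, wins.T + 1)
    np.fill_diagonal(theta2, 0.5)
    # Second candidate: strongest sampled challenger to the first arm
    challenger = theta2[:, first].copy()
    challenger[first] = -np.inf  # never duel an arm against itself
    second = int(np.argmax(challenger))
    return first, second

rng = np.random.default_rng(0)
wins = np.zeros((4, 4))          # 4 arms, no comparisons yet
first, second = d_ts_step(wins, rng)
```

After each comparison one would increment `wins[winner, loser]`, so both posterior draws concentrate around the true preference matrix as evidence accumulates.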
Related benchmarks
| Task | Dataset | Metric | Value | Rank |
|---|---|---|---|---|
| Dueling Bandits | Jester | Recovery Fraction | 46.7 | 15 |
| Winner Determination | MovieLens | Cumulative Regret | 36.733 | 15 |
| Dueling Bandits | MovieLens | Recovery Fraction | 13.3 | 15 |
| Winner Determination | Synthetic | Cumulative Regret | 35.3 | 15 |
| Best Arm Identification | Synthetic | True Rank of Reported Winner | 4.767 | 15 |
| Best Arm Identification | Jester | True Rank of Reported Winner | 3.133 | 15 |
| Best Arm Identification | MovieLens | True Rank | 9.233 | 15 |
| Dueling Bandits | Synthetic | Recovery Fraction | 26.7 | 15 |
| Winner Determination | Jester | Cumulative Regret | 34.067 | 15 |