
Conformal Bandits: Bringing statistical validity and reward efficiency to the small-gap regime

About

We introduce Conformal Bandits, a novel framework integrating Conformal Prediction (CP) into bandit problems, a classic paradigm for sequential decision-making under uncertainty. Traditional regret-minimisation bandit strategies such as Thompson Sampling and the Upper Confidence Bound (UCB) typically rely on distributional assumptions or asymptotic guarantees; moreover, they remain largely focused on regret, neglecting their statistical properties. We address this gap. Through the adoption of CP, we bridge the regret-minimising potential of a decision-making bandit policy with statistical guarantees in the form of finite-time prediction coverage. We demonstrate the potential of Conformal Bandits through simulation studies and an application to portfolio allocation, a typical small-gap regime, where differences in arm rewards are far too small for classical policies to achieve optimal regret bounds in finite samples. Motivated by this, we showcase our framework's practical advantage in terms of regret in small-gap settings, as well as its added value in achieving nominal coverage guarantees where classical UCB policies fail. Focusing on our application of interest, we further illustrate how integrating hidden Markov models to capture the regime-switching behaviour of financial markets enhances the exploration-exploitation trade-off and translates into higher risk-adjusted returns, while preserving coverage guarantees.
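The paper's exact construction is not reproduced here; as a rough illustration of the idea, the following sketch pairs a per-arm split-conformal prediction interval with a UCB-style rule that pulls the arm with the largest upper conformal bound. All function names, the Gaussian reward model, and the warm-up schedule are assumptions for illustration only, not the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

def conformal_interval(rewards, alpha=0.1):
    """Split-conformal interval for the next reward of one arm.

    Fits a mean on the first half of the history and calibrates
    absolute residuals on the second half (a simplified sketch,
    not the paper's construction).
    """
    rewards = np.asarray(rewards, dtype=float)
    half = len(rewards) // 2
    mu = rewards[:half].mean()              # point prediction from fit split
    scores = np.abs(rewards[half:] - mu)    # nonconformity scores
    m = len(scores)
    # finite-sample conformal quantile index: ceil((m+1)(1-alpha))
    k = int(np.ceil((m + 1) * (1 - alpha)))
    q = np.sort(scores)[min(k, m) - 1]
    return mu - q, mu + q

def conformal_ucb(arm_means, horizon=2000, alpha=0.1, warmup=20):
    """Toy bandit loop: after a round-robin warm-up, select the arm
    whose conformal interval has the largest upper endpoint."""
    K = len(arm_means)
    history = [[] for _ in range(K)]
    pulls = np.zeros(K, dtype=int)
    for t in range(horizon):
        if t < K * warmup:
            a = t % K                        # round-robin warm-up
        else:
            uppers = [conformal_interval(h, alpha)[1] for h in history]
            a = int(np.argmax(uppers))
        r = rng.normal(arm_means[a], 1.0)    # assumed Gaussian rewards
        history[a].append(r)
        pulls[a] += 1
    return pulls

# Small-gap two-armed example (gap Δ = 0.05, as in the synthetic benchmarks)
pulls = conformal_ucb([0.0, 0.05])
print(pulls)
```

The optimistic "upper conformal bound" selection mirrors the UCB principle while the interval itself carries a finite-sample coverage guarantee under exchangeability; handling the adaptivity of bandit-collected data is precisely the technical point the paper addresses.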

Simone Cuonzo, Nina Deliu • 2025

Related benchmarks

Task | Dataset | Result | Rank
Bandit Interval Coverage and Width Evaluation | Gaussian reward scenario (Δ = 0.5), synthetic, 1,000 MC replicates | Coverage (Arm1): 80.47 | 6
Bandit Interval Coverage and Width Evaluation | Student-t reward scenario (Δ = 0.5), synthetic, 1,000 MC replicates | Coverage (Arm1): 81.29 | 6
Bandit Interval Coverage and Width Evaluation | Skew-t reward scenario (Δ = 0.5), synthetic, 1,000 MC replicates | Coverage (Arm1): 80.3 | 6
Multi-armed bandit policy evaluation | Student-t reward distribution (Δ = 0.05), synthetic, 1,000 MC replicates | Coverage (Arm1): 81.29 | 6
Multi-armed bandit policy evaluation | Gaussian reward distribution (Δ = 0.05), synthetic, 1,000 MC replicates | Coverage (Arm1): 80.47 | 6
Multi-armed bandit policy evaluation | Skew-t reward distribution (Δ = 0.05), synthetic, 1,000 MC replicates | Coverage (Arm1): 80.3 | 6
Portfolio Management | Financial market data, partial-information setting, 2018–2025 evaluation period | Total Return: 90.67 | 5
Portfolio Management | Full-information setting, 2018–2025 | Total Return: 176.1 | 4
