Betting is a powerful approach for estimating the mean of bounded random variables, pioneered by Waudby-Smith and Ramdas and inspired by game-theoretic statistics. Their results are currently the tightest known confidence intervals (CIs) and confidence sequences (CSs) for bounded observations with constant conditional means. In fact, betting-style CIs and CSs are nearly optimal.

The idea is to leverage the duality between hypothesis tests and CIs. In particular, imagine:

- Betting on whether each value $m∈[0,1]$ is the mean.
- For values $m$ that aren’t the mean, we will make money (using the core ideas in game-theoretic hypothesis testing).
- Our confidence set at any time is the set of $m$ such that we haven’t made sufficient money while betting against them.

We can generate Hoeffding-like (light-tailed scalar concentration:Hoeffding bound) CSs and empirical Bernstein bounds in this way, but the most powerful results come from considering payoff functions of the form

$M_{t}(m)=\prod_{i=1}^{t}\left(1+λ_{i}(X_{i}−m)\right),$ which is a martingale if $(λ_{t})$ is a predictable sequence and a nonnegative test-martingale if $λ_{i}∈[−1/(1−m),1/m]$. If our bets $λ_{i}$ are chosen well, then this will grow large when $m$ is not the true mean of the $X_{i}$. When $m=μ$ is the mean, then by Ville’s inequality $M_{t}(μ)$ is unlikely to ever get large. Formally, a $(1−α)$-CS is achieved by taking

$C_{t}=\{m∈[0,1]:M_{t}(m)<1/α\}.$

How should we actually choose our bets? See betting strategies.
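The set $C_{t}$ can be approximated by scanning a grid of candidate means. A minimal sketch (hypothetical helper, again with a constant bet size): for each candidate we average a positively-betting and a negatively-betting wealth process, since an average of test martingales is still a test martingale and a one-directional bet can only reject candidates on one side of the truth. A candidate is dropped once its wealth ever reaches $1/α$; by Ville's inequality the true mean survives this running check with probability at least $1−α$.

```python
import numpy as np

def betting_cs(xs, alpha=0.05, lam=0.5, grid=1001):
    """Grid approximation of C_t = {m in [0,1] : M_t(m) < 1/alpha}.

    For each candidate m, run two wealth processes -- one betting that
    the mean exceeds m, one that it falls below -- and average them.
    A candidate is rejected once its wealth ever reaches 1/alpha.
    Returns the smallest and largest surviving grid points.
    """
    xs = np.asarray(xs, dtype=float)
    eps = 1e-12
    keep = []
    for m in np.linspace(0.0, 1.0, grid):
        lam_up = min(lam, 1.0 / max(m, eps))          # bet: mean > m
        lam_dn = max(-lam, -1.0 / max(1.0 - m, eps))  # bet: mean < m
        wealth = 0.5 * (np.cumprod(1.0 + lam_up * (xs - m))
                        + np.cumprod(1.0 + lam_dn * (xs - m)))
        if wealth.max() < 1.0 / alpha:
            keep.append(m)
    return (keep[0], keep[-1]) if keep else (None, None)
```

On alternating 0/1 observations (mean 0.5) this returns an interval containing 0.5. The constant bet is far from optimal; tighter sets come from data-adaptive bets, as discussed in betting strategies.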