Many basic inequalities are unimprovable, in the sense that there are distributions which attain them with equality (this is the usual optimality of Markov and Chebyshev). However, if you enlarge the probability space via external randomization, then you can make some of these inequalities strictly tighter.

All of these examples come from the paper Randomized and exchangeable improvements of Markov’s, Chebyshev’s and Chernoff’s inequalities by Ramdas and Manole.

Markov

For nonnegative $X$, any $a > 0$, and $U \sim \mathrm{Unif}(0,1)$ independent of $X$,

$$\mathbb{P}(X \ge aU) \le \frac{\mathbb{E}[X]}{a},$$

which strengthens Markov’s inequality, since $\{X \ge a\} \subseteq \{X \ge aU\}$. We emphasize that $U$ must be independent of $X$. There is also an additive version: For real-valued $Y$, $\lambda > 0$, and $U \sim \mathrm{Unif}(0,1)$ independent of $Y$,

$$\mathbb{P}\left(Y \ge a + \frac{\log U}{\lambda}\right) \le e^{-\lambda a}\,\mathbb{E}[e^{\lambda Y}],$$

which again strengthens (the exponential form of) Markov, since $\log U \le 0$ almost surely; it follows by applying the multiplicative version to $e^{\lambda Y}$.
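As a quick sanity check on the multiplicative version $\mathbb{P}(X \ge aU) \le \mathbb{E}[X]/a$, here is a minimal Monte Carlo sketch. The exponential test distribution, the value $a = 5$, and all variable names are illustrative choices, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
a = 5.0

# X ~ Exponential(1), so E[X] = 1 and Markov gives P(X >= a) <= 1/a.
x = rng.exponential(1.0, size=n)
u = rng.uniform(size=n)  # external randomization, independent of X

markov_event = (x >= a).mean()          # P(X >= a): the plain Markov event
randomized_event = (x >= a * u).mean()  # P(X >= aU): a strictly larger event...
bound = x.mean() / a                    # ...still bounded by the same E[X]/a

print(f"P(X >= a)    ~ {markov_event:.4f}")
print(f"P(X >= aU)   ~ {randomized_event:.4f}")
print(f"E[X]/a bound = {bound:.4f}")
```

For $X \sim \mathrm{Exp}(1)$ and $a = 5$, the randomized event has probability $\int_0^1 e^{-5u}\,du = (1 - e^{-5})/5 \approx 0.199$, nearly saturating the bound $\mathbb{E}[X]/a = 0.2$, whereas the plain Markov event has probability only $e^{-5} \approx 0.007$.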

Chebyshev

For $X_1, \dots, X_n$ iid with mean $\mu$ and variance $\sigma^2$, and $U \sim \mathrm{Unif}(0,1)$ independent of $X_1, \dots, X_n$,

$$\mathbb{P}\left(|\bar{X}_n - \mu| \ge \frac{a\sigma\sqrt{U}}{\sqrt{n}}\right) \le \frac{1}{a^2},$$

which strengthens Chebyshev’s inequality, since $\sqrt{U} \le 1$. (It follows by applying the randomized Markov inequality to $(\bar{X}_n - \mu)^2$, which has mean $\sigma^2/n$.)
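A randomized Chebyshev bound can be checked numerically in one convenient form, $\mathbb{P}(|\bar{X}_n - \mu| \ge a\sigma\sqrt{U}/\sqrt{n}) \le 1/a^2$, obtained by applying the randomized Markov inequality to $(\bar{X}_n - \mu)^2$. The Gaussian samples and the parameters below are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(1)
trials, n = 200_000, 10
a = 2.0
mu, sigma = 0.0, 1.0  # standard normal observations

# Sample means of n iid observations, plus independent uniforms.
xbar = rng.normal(mu, sigma, size=(trials, n)).mean(axis=1)
u = rng.uniform(size=trials)

chebyshev = (np.abs(xbar - mu) >= a * sigma / np.sqrt(n)).mean()
randomized = (np.abs(xbar - mu) >= a * sigma * np.sqrt(u) / np.sqrt(n)).mean()
bound = 1 / a**2

print(f"P(|mean - mu| >= a*sigma/sqrt(n))         ~ {chebyshev:.4f}")
print(f"P(|mean - mu| >= a*sigma*sqrt(U)/sqrt(n)) ~ {randomized:.4f}")
print(f"1/a^2 bound                               = {bound:.4f}")
```

The shrunken threshold $a\sigma\sqrt{U}/\sqrt{n}$ makes the rejection event much larger, yet the same $1/a^2$ budget still covers it.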

Hoeffding

For $X_1, \dots, X_n$ iid taking values in $[0,1]$ with mean $\mu$, any $t > 0$, and $U \sim \mathrm{Unif}(0,1)$ independent of $X_1, \dots, X_n$, we have

$$\mathbb{P}\left(\bar{X}_n - \mu \ge t + \frac{\log U}{4nt}\right) \le e^{-2nt^2},$$

which is stronger than Hoeffding’s bound since $\log U \le 0$ almost surely. The proof works by applying the Chernoff method to the randomized Markov inequality above. In the same way, many of the other bounds in bounded scalar concentration can be randomized, including Bentkus’ inequality and Hoeffding’s bound remastered.
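The randomized Chernoff–Hoeffding bound $\mathbb{P}(\bar{X}_n - \mu \ge t + \log U/(4nt)) \le e^{-2nt^2}$ can also be checked by simulation. The Bernoulli(1/2) samples and the choices $n = 100$, $t = 0.1$ below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
trials, n, t = 100_000, 100, 0.1
mu = 0.5

# Bernoulli(1/2) observations in [0,1], plus independent uniforms.
xbar = rng.integers(0, 2, size=(trials, n)).mean(axis=1)
u = rng.uniform(size=trials)

hoeffding = (xbar - mu >= t).mean()
randomized = (xbar - mu >= t + np.log(u) / (4 * n * t)).mean()
bound = np.exp(-2 * n * t**2)

print(f"P(mean - mu >= t)                    ~ {hoeffding:.4f}")
print(f"P(mean - mu >= t + log(U)/(4nt))     ~ {randomized:.4f}")
print(f"exp(-2nt^2) bound                    = {bound:.4f}")
```

Since $\log U / (4nt) \le 0$, the randomized event strictly contains the Hoeffding event, yet the same $e^{-2nt^2}$ bound applies.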

Ville

There also exists a randomized version of Ville’s inequality, though it requires careful interpretation. For a nonnegative supermartingale $(M_t)_{t \ge 0}$ with $M_0 = 1$, any stopping time $\tau$, and $a > 0$ and $U \sim \mathrm{Unif}(0,1)$ drawn independently of $(M_t)_{t \ge 0}$ and $\tau$, we have

$$\mathbb{P}\left(\sup_{t < \tau} M_t \ge a \;\text{ or }\; M_\tau \ge aU\right) \le \frac{1}{a}.$$

In terms of sequential hypothesis testing, this should be interpreted as follows: use test martingales in the usual way, rejecting as soon as the wealth $M_t$ crosses $a$. If one stops at time $\tau$ for any reason and has not yet rejected (the budget runs out, say), then one can do a final test and check whether the wealth exceeds $aU$ instead of just $a$. If so, then one can reject. However, one cannot do this and then continue the procedure; this final test has to be the last step.
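The randomized Ville statement $\mathbb{P}(\sup_{t < \tau} M_t \ge a \text{ or } M_\tau \ge aU) \le 1/a$ can be checked on a concrete example. The multiplicative-random-walk martingale, the fixed stopping time $\tau = T$, and all parameters below are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(3)
trials, T, lam, a = 50_000, 100, 0.3, 10.0

# M_t = prod_{i<=t} (1 + lam * eps_i) with eps_i = +/-1: a nonnegative
# martingale with M_0 = 1. Use the deterministic stopping time tau = T.
eps = rng.choice([-1.0, 1.0], size=(trials, T))
m = np.cumprod(1.0 + lam * eps, axis=1)
u = rng.uniform(size=trials)  # drawn independently of the paths

ville = (m.max(axis=1) >= a).mean()  # classical Ville event up to tau
randomized = ((m[:, :-1].max(axis=1) >= a) | (m[:, -1] >= a * u)).mean()
bound = 1 / a

print(f"P(sup M_t >= a)                      ~ {ville:.4f}")
print(f"P(sup_{{t<tau}} M_t >= a or M_tau >= aU) ~ {randomized:.4f}")
print(f"1/a bound                            = {bound:.4f}")
```

The classical event is contained in the randomized one (if the wealth ever reaches $a$, it certainly reaches $aU$), so the simulation should show the randomized probability sitting between the classical one and $1/a$.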