Is Markov’s inequality optimal? In a somewhat trivial sense, no. Suppose $X \geq 0$ has mean $\mu$. Consider $t < \mu$. By Markov, $\mathbb{P}(X \geq t) \leq \mu/t > 1$, which is trivial. Since probabilities are bounded by 1, we can always write

$$\mathbb{P}(X \geq t) \leq \min\left(1, \frac{\mu}{t}\right).$$
We can also arrive at this conclusion in a slightly more interesting way. For any nonrandom $u \geq 0$, write $\mathbf{1}\{X \geq t\} \leq \frac{X + u}{t + u}$. Therefore,

$$\mathbb{P}(X \geq t) \leq \frac{\mu + u}{t + u}.$$

Optimizing over $u \geq 0$ gives $\min\left(1, \frac{\mu}{t}\right)$ on the right-hand side: when $t \geq \mu$ the minimum is at $u = 0$, and when $t < \mu$ the right-hand side decreases to 1 as $u \to \infty$.
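To see the optimization concretely, here is a small numerical sketch (my own illustration; the mean and thresholds are chosen arbitrarily) that minimizes $(\mu + u)/(t + u)$ over a grid of $u$ and compares the result to $\min(1, \mu/t)$:

```python
import numpy as np

mu = 2.0  # arbitrary mean, for illustration only
for t in [0.5, 1.0, 2.0, 4.0, 8.0]:
    u = np.linspace(0.0, 1e6, 100_001)   # grid standing in for u in [0, infinity)
    bound = (mu + u) / (t + u)           # right-hand side for each shift u
    # When t >= mu the minimum sits at u = 0; when t < mu the infimum (= 1)
    # is only approached as u grows, so the grid minimum is slightly above 1.
    print(t, bound.min(), min(1.0, mu / t))
```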

Achieving Markov

The optimization perspective on Markov’s inequality demonstrates (though it can of course be proved more directly) that the distributions which attain Markov’s inequality take two values. More precisely, if $\mathbb{E}[X] = \mu$ and $\mathbb{P}(X \geq t) = \mu/t$, then $X$ takes the value $0$ with probability $1 - \mu/t$ and the value $t$ with probability $\mu/t$. Consequently, the equality can hold only for a single threshold. That is, no distribution can attain $\mathbb{P}(X \geq t) = \mu/t$ for all $t \geq \mu$ simultaneously.

Another way to state the optimality is that, for $t \geq \mu$,

$$\sup_{X \geq 0:\ \mathbb{E}[X] = \mu} \mathbb{P}(X \geq t) = \frac{\mu}{t}.$$
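As a quick sanity check (my own sketch, with $\mu$ and $t$ chosen arbitrarily), the two-point distribution above has mean $\mu$, attains equality at the threshold $t$, and is strict at every other threshold:

```python
import numpy as np

mu, t = 1.0, 4.0                         # illustrative mean and threshold, t >= mu
values = np.array([0.0, t])              # support of the attaining distribution
probs = np.array([1 - mu / t, mu / t])

print(values @ probs)                    # mean is exactly mu
print(probs[values >= t].sum(), mu / t)  # equality: P(X >= t) = mu/t
for s in [1.0, 2.0, 3.0]:                # at other thresholds the bound is strict
    print(s, probs[values >= s].sum(), mu / s)
```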
This has consequences for concentration inequalities. In particular, if $X_1$ and $X_2$ are iid and each take two values, then $X_1 + X_2$ can take three values. Since attaining Markov requires a distribution supported on two points, Markov’s inequality cannot be tight for sums of more than one random variable.
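Here is a sketch of that obstruction (again my own illustration, reusing the two-point distribution from above): enumerating the outcomes of two iid copies shows the sum is supported on three points, and Markov is strict for it at the tested thresholds:

```python
import itertools
import numpy as np

mu, t = 1.0, 4.0
values = np.array([0.0, t])
probs = np.array([1 - mu / t, mu / t])

sum_dist = {}                            # distribution of X1 + X2 by enumeration
for (v1, p1), (v2, p2) in itertools.product(zip(values, probs), repeat=2):
    sum_dist[v1 + v2] = sum_dist.get(v1 + v2, 0.0) + p1 * p2

print(sorted(sum_dist))                  # three support points: 0, t, 2t
for s in [t, 2 * t]:                     # Markov for the sum, which has mean 2*mu
    ps = sum(p for v, p in sum_dist.items() if v >= s)
    print(s, ps, 2 * mu / s)             # strictly below the bound at each s
```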

Achieving Chebyshev

Since $\{|X - \mu| \geq t\} = \{(X - \mu)^2 \geq t^2\}$ for $t > 0$, Chebyshev’s inequality is attained precisely when Markov is attained by $(X - \mu)^2$. This implies that random variables attaining equality in Chebyshev’s inequality can take 3 values: $\mu - t$, $\mu$, and $\mu + t$ for some $t \geq \sigma$, where $\sigma^2$ is the variance of $X$. That is, $\mathbb{P}(X = \mu - t) = \frac{\sigma^2}{2t^2}$, $\mathbb{P}(X = \mu) = 1 - \frac{\sigma^2}{t^2}$, and $\mathbb{P}(X = \mu + t) = \frac{\sigma^2}{2t^2}$.
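The same kind of numerical check works here (a sketch with $\mu$, $\sigma$, and $t \geq \sigma$ chosen arbitrarily so that the middle probability is nonnegative):

```python
import numpy as np

mu, sigma, t = 0.0, 1.0, 2.0             # illustrative parameters, t >= sigma
p = sigma**2 / (2 * t**2)                # mass on each of mu - t and mu + t
values = np.array([mu - t, mu, mu + t])
probs = np.array([p, 1 - 2 * p, p])

print(values @ probs)                                  # mean: mu
print(((values - mu) ** 2) @ probs, sigma**2)          # variance: sigma^2
print(probs[np.abs(values - mu) >= t].sum(), sigma**2 / t**2)  # equality in Chebyshev
```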

As above, if $X_1, X_2$ are iid and each take two values, then $X_1 + X_2$ can take three values. This implies that Chebyshev might be tight for sums of two variables. But if $X_3$ is a third iid rv, then $X_1 + X_2 + X_3$ can take four values, meaning that Chebyshev’s inequality cannot be tight for sums of more than two random variables. This was first proved by Ghosh and Meeden in 1977: On the non-attainability of Chebyshev bounds.
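A short enumeration makes the counting argument explicit (the values $a$ and $b$ are arbitrary; only the support sizes matter): two iid two-valued summands give three support points, while three give four, ruling out the three-point structure that Chebyshev equality requires:

```python
import itertools

a, b = 0.0, 1.0                          # any two distinct values
for n in [2, 3]:
    support = {sum(combo) for combo in itertools.product([a, b], repeat=n)}
    print(n, sorted(support))            # n = 2: three points; n = 3: four points
```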