The function

$$\psi_p(x) = \Big(1 + \frac{x}{p}\Big)_+^p \quad \text{for } p \in [1, \infty), \qquad \psi_\infty(x) = e^x,$$

is nondecreasing in $x$ and in $p$ (and continuous in $p$ for each fixed $x$, with $\psi_p(x) \uparrow e^x$ as $p \to \infty$). Since $\psi_p$ is nonnegative and $\psi_p(u) \geq \psi_p(0) = 1$ for $u \geq 0$, we have $\mathbf{1}\{x \geq t\} \leq \psi_p(\lambda(x - t))$ for any $\lambda \geq 0$. Therefore, for any random variable $X$,

$$\Pr(X \geq t) \;\leq\; \mathbb{E}\big[\psi_p(\lambda(X - t))\big] \;\leq\; \mathbb{E}\big[e^{\lambda(X - t)}\big].$$
Thus, we see that the Chernoff method corresponds to the case $p = \infty$ and Markov's inequality corresponds to $p = 1$ (take $\lambda = 1/t$ and $X \geq 0$ to recover $\Pr(X \geq t) \leq \mathbb{E}[X]/t$). Considering other values of $p$ can thus get us tighter inequalities than the Chernoff bound. Of course, for values of $p$ in $(1, \infty)$ the bound is much less nice to work with, especially for sums of random variables, since $\psi_p$ does not factorize over independent terms the way the exponential does.
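To see the ordering concretely, here is a small numerical sketch (my illustration, not part of the original argument): for a standard Gaussian $X$ it estimates $\mathbb{E}[\psi_p(\lambda(X - t))]$ by Monte Carlo at the Chernoff-optimal $\lambda = t$, so every finite $p$ should land below the Chernoff value $e^{-t^2/2}$. The function name `psi` and the grid of $p$ values are arbitrary choices.

```python
from math import erf, exp, sqrt

import numpy as np

rng = np.random.default_rng(0)

def psi(x, p):
    """psi_p(x) = (1 + x/p)_+^p, with psi_inf(x) = exp(x)."""
    if np.isinf(p):
        return np.exp(x)
    return np.clip(1.0 + x / p, 0.0, None) ** p

t = 2.0                 # we bound P(X >= t) for X ~ N(0, 1)
lam = t                 # Chernoff-optimal lambda for a standard Gaussian
X = rng.standard_normal(1_000_000)

# E[psi_p(lam * (X - t))] is nondecreasing in p, so every row should
# sit below the Chernoff bound (the p = inf case).
for p in [1.0, 2.0, 5.0, 20.0, np.inf]:
    bound = psi(lam * (X - t), p).mean()
    print(f"p = {p:>4}: bound ~= {bound:.4f}")

print(f"Chernoff (closed form): {exp(-t**2 / 2):.4f}")
print(f"True tail P(X >= 2):    {0.5 * (1 - erf(t / sqrt(2))):.4f}")
```

Note that $\lambda$ is held fixed at the Chernoff-optimal value here; optimizing $\lambda$ separately for each $p$ would tighten the finite-$p$ bounds further.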

While the Chernoff method requires the moment generating function to be finite (at least in a neighborhood of zero), which in particular forces all moments to exist, this method only requires a finite $p$-th moment. As with concentration via convex optimization, this method is tighter than Chernoff, so if we really need tight concentration in practice it is worth trying.
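As a concrete illustration of the weaker moment requirement (again my own sketch, with arbitrary parameter choices): a Pareto variable with tail index $\alpha$ has an infinite MGF for every $\lambda > 0$, so the Chernoff bound is vacuous, yet $\mathbb{E}[\psi_p(\lambda(X - t))]$ is finite for any $p < \alpha$. Choosing $\lambda = p/t$ makes $\psi_p(\lambda(x - t)) = (x/t)_+^p$, so the bound collapses to the familiar moment bound $\mathbb{E}[X^p]/t^p$.

```python
import numpy as np

rng = np.random.default_rng(0)

# Pareto with tail index alpha: P(X >= x) = x^(-alpha) for x >= 1.
# E[exp(lam * X)] = inf for every lam > 0, so Chernoff is vacuous,
# but E[X^p] = alpha / (alpha - p) is finite whenever p < alpha.
alpha = 5.0
X = (1.0 - rng.random(1_000_000)) ** (-1.0 / alpha)  # inverse-CDF sampling

def psi(x, p):
    """psi_p(x) = (1 + x/p)_+^p."""
    return np.clip(1.0 + x / p, 0.0, None) ** p

t = 10.0
for p in [1.0, 2.0, 2.4]:       # keep p < alpha so the expectation is finite
    lam = p / t                  # this choice reduces the bound to E[X^p] / t^p
    bound = psi(lam * (X - t), p).mean()
    print(f"p = {p}: bound ~= {bound:.5f} "
          f"(exact: {alpha / (alpha - p) / t**p:.5f})")

print(f"True tail P(X >= {t}): {t**-alpha:.5f}")
```

The bounds are loose here, but the point is that they are finite at all: for this $X$ the Chernoff bound is identically $+\infty$.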

This observation gives rise to Bentkus' inequality, and bounds which deploy this idea have come to be known as Bentkus-style bounds.