Let $(S_n)_{n \ge 0}$ be a martingale with respect to the filtration $(\mathcal{F}_n)_{n \ge 0}$. Assume $S_n$ is scalar-valued unless otherwise indicated. Here we investigate concentration inequalities for $S_n$.
Note that martingale concentration inequalities generalize concentration inequalities for independent random variables (e.g., bounded scalar concentration), since we may take $S_n = \sum_{i=1}^n (X_i - \mathbb{E} X_i)$ for independent random variables $X_1, X_2, \dots$, in which case the following bounds translate into bounds on $\sum_{i=1}^n X_i - \mathbb{E}\sum_{i=1}^n X_i$.
While we state concrete, mostly fixed-time results here, we note that many of the following bounds were made time-uniform (and often tightened) using sub-psi processes.
See also matrix martingale inequalities.
Azuma-Hoeffding inequality
Assume that $|S_n - S_{n-1}| \le c_n$ for all $n$, i.e., the martingale has bounded increments. Then, for all $t > 0$,
$$\mathbb{P}\left(|S_n - S_0| \ge t\right) \le 2\exp\left(\frac{-t^2}{2\sum_{i=1}^n c_i^2}\right).$$
The natural one-sided versions of this inequality also exist. Note that $n$ is fixed in advance here (i.e., it is a fixed-time result).
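As a quick numerical illustration, here is a minimal Python sketch (the helper name `azuma_bound` and the example values are ours) that evaluates the two-sided bound:

```python
import math

def azuma_bound(t, c):
    """Azuma-Hoeffding: P(|S_n - S_0| >= t) <= 2 exp(-t^2 / (2 sum_i c_i^2)),
    where c[i] bounds the i-th increment |S_i - S_{i-1}|."""
    return 2.0 * math.exp(-t**2 / (2.0 * sum(ci**2 for ci in c)))

# n = 100 increments, each bounded by 1, deviation t = 30:
print(azuma_bound(30.0, [1.0] * 100))  # ~0.022
```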
Dubins-Savage inequality
This is often considered Chebyshev's inequality for martingales. Let $X_i = S_i - S_{i-1}$ denote the increments, with conditional means $\mu_i$, i.e., $\mathbb{E}(X_i \mid \mathcal{F}_{i-1}) = \mu_i$ (zero for a martingale), and conditional variances $\sigma_i^2 = \mathrm{Var}(X_i \mid \mathcal{F}_{i-1})$. Then for any $a, b > 0$,
$$\mathbb{P}\left(\exists n \ge 1 : \sum_{i=1}^n (X_i - \mu_i) \ge a + b \sum_{i=1}^n \sigma_i^2\right) \le \frac{1}{1 + ab}.$$
This is a time-uniform result. It can also be generalized to infinite variance. If $\mathbb{E}\left(|X_i - \mu_i|^p \mid \mathcal{F}_{i-1}\right) = \gamma_i$ for some $p \in (1, 2]$, then
$$\mathbb{P}\left(\exists n \ge 1 : \sum_{i=1}^n (X_i - \mu_i) \ge a + b \sum_{i=1}^n \gamma_i\right) \le \frac{c_p}{1 + a^{p-1}b},$$
where $c_p$ is a constant dependent on $p$ (with $c_2 = 1$ recovering the bound above). This was proven by Khan in 2009.
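Since the statement is time-uniform, a simulation has to track the whole path. Below is a minimal Monte Carlo sketch (our own construction, not from the original sources), using standard Gaussian increments so that $\mu_i = 0$ and $\sigma_i^2 = 1$:

```python
import numpy as np

rng = np.random.default_rng(0)

def crossing_frequency(a, b, n_steps=500, n_paths=10000):
    """Estimate P(exists n <= n_steps : S_n >= a + b * n) for a standard
    Gaussian random walk, where n is the accumulated conditional variance."""
    increments = rng.standard_normal((n_paths, n_steps))
    paths = np.cumsum(increments, axis=1)
    boundary = a + b * np.arange(1, n_steps + 1)
    return np.mean((paths >= boundary).any(axis=1))

a, b = 5.0, 0.5
print(crossing_frequency(a, b))  # ~0.007 for Gaussian increments
print(1.0 / (1.0 + a * b))       # Dubins-Savage bound: ~0.286
```

The gap between the two numbers is expected: the bound holds uniformly over all increment distributions with the given conditional variances, and Gaussian increments are far from worst case.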
Variance bound
If the martingale has bounded increments, $|S_i - S_{i-1}| \le c$, and the conditional variances of the increments are also bounded, i.e.,
$$\mathrm{Var}\left(S_i - S_{i-1} \mid \mathcal{F}_{i-1}\right) \le \sigma_i^2,$$
then we can modify Azuma's bound to read
$$\mathbb{P}\left(|S_n - S_0| \ge t\right) \le 2\exp\left(\frac{-t^2}{4\sigma^2}\right),$$
where $\sigma^2 = \sum_{i=1}^n \sigma_i^2$, as long as $t \le 2\sigma^2/c$. Why is this better than Azuma's inequality? Since the increments are bounded by $c$, a trivial bound on $\sigma_i^2$ is $c^2$. Thus we may assume that $\sigma^2 \le nc^2$; whenever the conditional variances are appreciably smaller than this worst case (specifically $\sigma^2 < nc^2/2$), the right hand side of the bound is tighter.
This was first proved by DA Grable in A Large Deviation Inequality for Functions of Independent, Multi-way Choices. A modern proof is given by Dubhashi and Panconesi in their textbook, Concentration of Measure for the Analysis of Randomized Algorithms, Chapter 8.
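To see the improvement numerically, the sketch below (helper names are ours, assuming the statement above) compares the two bounds when the per-step variance $\sigma_i^2 = 0.1$ is much smaller than $c^2 = 1$:

```python
import math

def azuma(t, n, c):
    """Azuma-Hoeffding with a uniform increment bound c."""
    return 2.0 * math.exp(-t**2 / (2.0 * n * c**2))

def grable(t, n, c, sigma2_total):
    """Variance-based bound, valid for t <= 2 * sigma2_total / c."""
    assert t <= 2.0 * sigma2_total / c, "t outside the valid range"
    return 2.0 * math.exp(-t**2 / (4.0 * sigma2_total))

n, c, t = 1000, 1.0, 60.0
sigma2_total = 0.1 * n  # sigma_i^2 = 0.1 << c^2 = 1
print(azuma(t, n, c))                 # 2 exp(-1.8) ~ 0.33
print(grable(t, n, c, sigma2_total))  # 2 exp(-9)   ~ 2.5e-4
```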
Bentkus-style inequality
Let $(S_n)_{n \ge 0}$ (with $S_0 = 0$) be a supermartingale adapted to $(\mathcal{F}_n)_{n \ge 0}$. If $|S_n - S_{n-1}| \le 1$ for all $n$, then for all $t > 0$,
$$\mathbb{P}(S_n \ge t) \le e^2\, \mathbb{P}\left(\sum_{i=1}^n \varepsilon_i \ge t\right),$$
where $\varepsilon_1, \dots, \varepsilon_n$ are i.i.d. Rademacher random variables (uniform on $\{-1, +1\}$).
Similarly to Bentkus' inequality for scalar random variables, this improves over the Chernoff method. We can further bound this as
$$\mathbb{P}(S_n \ge t) \le e^2\, \mathbb{P}\left(B_{n,1/2} \ge u\right),$$
where $B_{n,1/2} \sim \mathrm{Binomial}(n, 1/2)$ and $u = (t + n)/2$. The right hand side can be computed explicitly, or approximated, in some circumstances. See here.
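The binomial tail is available in standard libraries; here is a minimal sketch (helper name ours, using `scipy.stats.binom`):

```python
import math
from scipy.stats import binom

def bentkus_bounded(t, n):
    """e^2 * P(Binomial(n, 1/2) >= u) with u = (t + n) / 2.
    binom.sf(k, ...) is P(B > k), so P(B >= u) = sf(ceil(u) - 1)."""
    u = (t + n) / 2.0
    return math.e**2 * binom.sf(math.ceil(u) - 1, n, 0.5)

# Same setting as the Azuma example (n = 100, t = 30):
print(bentkus_bounded(30.0, 100))  # ~0.013
```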
Bentkus-style inequality for variance
In addition to the boundedness condition above, suppose we also have $\mathbb{E}\left[(S_i - S_{i-1})^2 \mid \mathcal{F}_{i-1}\right] \le \sigma^2$. Then we can write
$$\mathbb{P}(S_n \ge t) \le e^2\, \mathbb{P}\left(\sum_{i=1}^n \eta_i \ge t\right),$$
where the $\eta_i$ are i.i.d. with $\mathbb{P}(\eta_i = 1) = \frac{\sigma^2}{1 + \sigma^2}$ and $\mathbb{P}(\eta_i = -\sigma^2) = \frac{1}{1 + \sigma^2}$.
As above, we can bound the right hand side with a worst case distribution. In particular, this time we have $B_{n,p} \sim \mathrm{Binomial}(n, p)$ with $p = \frac{\sigma^2}{1 + \sigma^2}$ and $u = \frac{t + n\sigma^2}{1 + \sigma^2}$, so that $\mathbb{P}(S_n \ge t) \le e^2\, \mathbb{P}(B_{n,p} \ge u)$.
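Under the reconstruction above, the variance-aware version only changes the binomial parameters; a small sketch (helper name ours):

```python
import math
from scipy.stats import binom

def bentkus_variance(t, n, sigma2):
    """e^2 * P(Binomial(n, p) >= u) with p = sigma2 / (1 + sigma2)
    and u = (t + n * sigma2) / (1 + sigma2)."""
    p = sigma2 / (1.0 + sigma2)
    u = (t + n * sigma2) / (1.0 + sigma2)
    return math.e**2 * binom.sf(math.ceil(u) - 1, n, p)

# With conditional second moments bounded by 0.25, the tail is orders of
# magnitude smaller than in the bounded-only version above:
print(bentkus_variance(30.0, 100, 0.25))
```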
References
- Concentration of Measure for the Analysis of Randomized Algorithms by Dubhashi and Panconesi.