Let $(X_t)_{t \ge 0}$ be a martingale with respect to the filtration $(\mathcal{F}_t)_{t \ge 0}$. Assume $X_t$ is scalar-valued unless otherwise indicated. Here we investigate concentration inequalities for $X_t$.

Note that martingale concentration inequalities generalize concentration inequalities for independent random variables (e.g., light-tailed scalar concentration), since for independent random variables $Y_1, \ldots, Y_t$ we may take $X_t = \sum_{i=1}^t \big(Y_i - \mathbb{E}[Y_i]\big)$, in which case the following bounds translate into bounds on $\sum_{i=1}^t Y_i$ around its mean.
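
To spell out why this choice is a martingale (a standard one-line check, with $\mathcal{F}_t = \sigma(Y_1, \ldots, Y_t)$ and $Y_t$ independent of $\mathcal{F}_{t-1}$):

$$\mathbb{E}[X_t \mid \mathcal{F}_{t-1}] = X_{t-1} + \mathbb{E}\big[Y_t - \mathbb{E}[Y_t] \mid \mathcal{F}_{t-1}\big] = X_{t-1} + \mathbb{E}[Y_t] - \mathbb{E}[Y_t] = X_{t-1},$$

so a tail bound on $X_t$ is exactly a tail bound on $\sum_{i \le t} Y_i$ around its mean.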

While we state concrete, mostly fixed-time results here, we note that many of the following bounds were made time-uniform (and often tightened) using sub-psi processes.

Azuma-Hoeffding inequality

Assume that $|X_i - X_{i-1}| \le c_i$ for all $i$, i.e., the martingale has bounded increments. Then, for all $\lambda > 0$,

$$\mathbb{P}\big(|X_t - X_0| \ge \lambda\big) \le 2\exp\left(-\frac{\lambda^2}{2\sum_{i=1}^t c_i^2}\right).$$

The natural one-sided versions of this inequality also exist. Note that $t$ is fixed in advance here (i.e., it is a fixed-time result).
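
As a quick numerical sanity check (not from the original text; the horizon, threshold, and path count are illustrative choices), the following sketch simulates a $\pm 1$ random walk, for which $c_i = 1$, and compares the empirical two-sided tail to the Azuma-Hoeffding bound above:

```python
import numpy as np

rng = np.random.default_rng(0)

t = 400            # fixed time horizon
lam = 50.0         # deviation threshold lambda
n_paths = 100_000  # number of simulated martingale paths

# X_t is a sum of t independent +/-1 steps, simulated via a binomial count
# of +1 steps; each draw is one path of the martingale at time t.
X_t = 2.0 * rng.binomial(t, 0.5, size=n_paths) - t

empirical = np.mean(np.abs(X_t) >= lam)
# Azuma-Hoeffding with c_i = 1: 2 * exp(-lam^2 / (2 * sum_i c_i^2)).
azuma = 2.0 * np.exp(-lam**2 / (2.0 * t))

print(f"empirical P(|X_t| >= {lam}): {empirical:.4f}")
print(f"Azuma-Hoeffding bound:      {azuma:.4f}")
```

For this walk the exponent $\lambda^2/(2t)$ matches the Gaussian approximation to the tail, so the remaining gap between the bound and the empirical frequency is only in the prefactor.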

Dubins-Savage inequality

This is often considered Chebyshev's inequality for martingales. If the increments $Y_t = X_t - X_{t-1}$ have conditional means zero, i.e., $\mathbb{E}[Y_t \mid \mathcal{F}_{t-1}] = 0$ (as holds for any martingale), and conditional variances $V_t = \mathbb{E}[Y_t^2 \mid \mathcal{F}_{t-1}]$, then for any $a, b > 0$,

$$\mathbb{P}\left(X_t - X_0 \ge a + b\sum_{i=1}^t V_i \ \text{ for some } t \ge 1\right) \le \frac{1}{1 + ab}.$$

This is a time-uniform result.
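
One way to see the Chebyshev analogy (a standard specialization; the fixed time $t$, variance cap $\sigma^2$, and the choice of $a$ and $b$ below are ours): suppose $\sum_{i=1}^t V_i \le \sigma^2$ almost surely and take $a = \lambda/2$, $b = \lambda/(2\sigma^2)$, so that $a + b\sum_{i \le t} V_i \le \lambda$. Then

$$\mathbb{P}\big(X_t - X_0 \ge \lambda\big) \le \mathbb{P}\left(X_s - X_0 \ge a + b\sum_{i=1}^s V_i \ \text{ for some } s \ge 1\right) \le \frac{1}{1 + \lambda^2/(4\sigma^2)} \le \frac{4\sigma^2}{\lambda^2},$$

which is Chebyshev's rate up to a factor of four, obtained from a time-uniform statement.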

Variance bound

If the martingale has bounded increments and the conditional variances of the increments are also bounded, i.e.,

$$|X_i - X_{i-1}| \le M \quad \text{and} \quad \mathrm{Var}\big(X_i - X_{i-1} \mid \mathcal{F}_{i-1}\big) \le \sigma_i^2 \quad \text{for all } i,$$

then we can modify Azuma's bound to read

$$\mathbb{P}\big(|X_t - X_0| \ge \lambda\big) \le 2\exp\left(-\frac{\lambda^2}{4\sigma^2}\right),$$

where $\sigma^2 = \sum_{i=1}^t \sigma_i^2$, as long as $\lambda \le 2\sigma^2 / M$. Why is this better than Azuma's inequality? Since the increments are bounded by $M$, a trivial bound on $\sigma_i^2$ is $M^2$. Thus we may assume that $\sigma^2 \le t M^2$, and whenever the conditional variances are substantially smaller than $M^2$ the exponent on the right-hand side is larger, so the bound is tighter than Azuma's.

This was first proved by D. A. Grable in A Large Deviation Inequality for Functions of Independent, Multi-way Choices. A modern proof is given by Dubhashi and Panconesi in their textbook, Concentration of Measure for the Analysis of Randomized Algorithms, Chapter 8.
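
The following sketch (our own toy example with illustrative parameters) simulates a martingale whose increments are $\pm M$ with probability $p/2$ each and $0$ otherwise, so that $|X_i - X_{i-1}| \le M$ while $\sigma_i^2 = pM^2 \ll M^2$, and evaluates both Azuma's bound and the variance bound as stated above:

```python
import numpy as np

rng = np.random.default_rng(1)

t, M, p = 2000, 1.0, 0.05   # illustrative horizon, increment bound, jump probability
sigma2 = t * p * M**2       # sum of conditional variance bounds: sigma^2 = t * p * M^2
lam = 30.0                  # threshold; lam <= 2 * sigma2 / M = 200 holds
n_paths = 100_000

# Each path: k nonzero steps out of t, each +/-M with equal probability.
k = rng.binomial(t, p, size=n_paths)
X_t = M * (2.0 * rng.binomial(k, 0.5) - k)

empirical = np.mean(np.abs(X_t) >= lam)
azuma    = 2.0 * np.exp(-lam**2 / (2.0 * t * M**2))  # Azuma with c_i = M
variance = 2.0 * np.exp(-lam**2 / (4.0 * sigma2))    # variance bound stated above

print(f"empirical tail: {empirical:.4f}")
print(f"Azuma bound:    {azuma:.4f}")   # vacuous (> 1) for these parameters
print(f"variance bound: {variance:.4f}")
```

With these parameters Azuma's bound is vacuous while the variance bound is nontrivial, reflecting the discussion above.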

Variance bound (matrix version)

Suppose $(X_t)$ is a matrix-valued martingale taking values in the $d \times d$ Hermitian matrices. Let $Y_t = X_t - X_{t-1}$ denote the increments and suppose

$$\|Y_t\|_{\mathrm{op}} \le M \quad \text{and} \quad \left\|\sum_{i=1}^t \mathbb{E}\big[Y_i^2 \mid \mathcal{F}_{i-1}\big]\right\|_{\mathrm{op}} \le \sigma^2.$$

Then, for $\lambda \le 2\sigma^2 / M$,

$$\mathbb{P}\big(\|X_t - X_0\|_{\mathrm{op}} \ge \lambda\big) \le 2d\exp\left(-\frac{\lambda^2}{4\sigma^2}\right),$$

where each $X_t$ is a $d \times d$ matrix. This was first proved by David Gross in Recovering Low-Rank Matrices From Few Coefficients In Any Basis.
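
As a small numerical illustration (our own toy instance with illustrative parameters, using the bound as stated above): take increments $Y_i = \mathrm{diag}(\varepsilon_{i,1}, \ldots, \varepsilon_{i,d})$ with independent $\pm 1$ signs, so that $\|Y_i\|_{\mathrm{op}} = 1$ (i.e., $M = 1$) and $\big\|\sum_{i \le t} \mathbb{E}[Y_i^2 \mid \mathcal{F}_{i-1}]\big\|_{\mathrm{op}} = t$, i.e., $\sigma^2 = t$:

```python
import numpy as np

rng = np.random.default_rng(2)

d, t = 10, 400     # matrix dimension and time horizon (illustrative)
lam = 80.0         # threshold; lam <= 2 * sigma^2 / M = 2t holds
n_paths = 50_000

# With diagonal sign increments, ||X_t||_op is the largest absolute value
# among d independent +/-1 random walks of length t.
walks = 2.0 * rng.binomial(t, 0.5, size=(n_paths, d)) - t
op_norms = np.abs(walks).max(axis=1)

empirical = np.mean(op_norms >= lam)
bound = 2.0 * d * np.exp(-lam**2 / (4.0 * t))  # 2d * exp(-lam^2 / (4 sigma^2))

print(f"empirical P(||X_t||_op >= {lam}): {empirical:.5f}")
print(f"matrix variance bound:            {bound:.5f}")
```

In this toy instance the dimensional factor $d$ in the bound plays the role of a union bound over the $d$ independent walks on the diagonal.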

References