When constructing confidence sequences using either the method of mixtures or predictable plug-ins, one often ends up with rates on the order of $O(\sqrt{\log(t)/t})$. While this is clearly shrinking to zero as $t \to \infty$, it’s worth asking whether it’s shrinking to zero at an optimal rate. If the problem satisfies an LIL (law of the iterated logarithm), then the optimal rate is $O(\sqrt{\log\log(t)/t})$.
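To get a sense of the gap between these two rates, here is a quick back-of-the-envelope comparison (constants suppressed, purely illustrative):

```python
import math

# Compare the generic mixture/plug-in width ~ sqrt(log(t)/t) against the
# LIL-optimal width ~ sqrt(log(log(t))/t). Constants are suppressed, so
# only the ratio between the two rates is meaningful.
for t in [10**2, 10**4, 10**6, 10**8]:
    mixture = math.sqrt(math.log(t) / t)
    lil = math.sqrt(math.log(math.log(t)) / t)
    print(f"t={t:>9}: sqrt(log t / t)={mixture:.2e}  "
          f"sqrt(log log t / t)={lil:.2e}  ratio={mixture / lil:.2f}")
```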
Obtaining bounds that shrink at this rate can be done via stitching (see time-uniform, nonparametric, nonasymptotic confidence sequences). This involves deploying distinct bounds over different epochs, and then taking a union bound to ensure time-uniform coverage. E.g., one might consider the epochs $[2^k, 2^{k+1})$ for $k \geq 0$, and deploy an estimator $\widehat{\theta}_t$ with the guarantee

$$\mathbb{P}\left(\exists t \in [2^k, 2^{k+1}) : |\widehat{\theta}_t - \theta| \geq \varepsilon_k\right) \leq \delta_k,$$

where $\theta$ is some parameter of interest. If $\sum_{k \geq 0} \delta_k \leq \delta$, then we can get time-uniform coverage with probability $1 - \delta$ for all $t$ via a union bound. The goal is then to show that $\varepsilon_k$ shrinks at the desired iterated logarithm rate. There are many examples of this, e.g. here, here, here, and here.
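To make this concrete, here is a minimal numerical sketch of a stitched width, assuming $1$-sub-Gaussian observations, a Hoeffding-style bound in each epoch, and the simple allocation $\delta_k = 6\delta/(\pi^2(k+1)^2)$ (so that $\sum_k \delta_k \leq \delta$). The function name and constants are mine; the stitched boundaries in the papers linked above are considerably sharper:

```python
import math

def stitched_width(t: int, delta: float = 0.05) -> float:
    """Width of a (loose) stitched confidence sequence at time t.

    Epoch k covers [2^k, 2^{k+1}) and receives error budget
    delta_k = 6*delta / (pi^2 * (k+1)^2), so sum_k delta_k <= delta and a
    union bound over epochs yields time-uniform coverage. Within an epoch
    we use a Hoeffding-style width for 1-sub-Gaussian observations,
    evaluated at the epoch start 2^k >= t/2 (constants are loose).
    """
    k = int(math.log2(t))  # epoch index: t in [2^k, 2^{k+1})
    delta_k = 6 * delta / (math.pi**2 * (k + 1) ** 2)
    return math.sqrt(2 * math.log(2 / delta_k) / 2**k)

for t in [10**2, 10**4, 10**6]:
    target = math.sqrt(math.log(math.log(t)) / t)
    print(f"t={t:>7}: stitched width={stitched_width(t):.4f}, "
          f"sqrt(log log t / t)={target:.4f}")
```

Since $2^k \geq t/2$ within epoch $k$ and $k \approx \log_2 t$, we have $\log(1/\delta_k) = O(\log(1/\delta) + \log\log t)$, so the width above tracks $\sqrt{\log\log(t)/t}$ up to constants.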
Duchi and Haque applied stitching to non-time-uniform estimators and still achieved iterated logarithm rates. In other words, they give a general reduction from fixed-time bounds to time-uniform bounds at the cost of only an iterated logarithm factor. In particular, they show that if, for all $n$, there exists a fixed-time estimator $\widehat{\theta}_n$ of $\theta$ such that

$$\mathbb{P}\left(|\widehat{\theta}_n - \theta| \geq \varepsilon_n(\delta)\right) \leq \delta$$

for some function $\varepsilon_n$, then the estimator $\widetilde{\theta}_t := \widehat{\theta}_{t_k}$, where $t_k := 2^k$ for $t \in [2^k, 2^{k+1})$, satisfies

$$\mathbb{P}\left(\exists t \geq 1 : |\widetilde{\theta}_t - \theta| \geq \varepsilon_{t_k}(\delta_k)\right) \leq \delta \quad \text{whenever} \quad \sum_{k \geq 0} \delta_k \leq \delta.$$
That is, by updating the estimator only once every $2^k$ timesteps for $t \in [2^k, 2^{k+1})$, we can transform any fixed-time estimator into a time-uniform estimator at an iterated logarithm cost.
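Here is a minimal sketch of this reduction, under assumptions of my own choosing: the fixed-time estimator is the sample mean of $[0,1]$-valued observations with Hoeffding width $\varepsilon_n(\delta) = \sqrt{\log(2/\delta)/(2n)}$, and the error budget is split as $\delta_k = 6\delta/(\pi^2(k+1)^2)$. The class name `DoublingEstimator` and these choices are illustrative rather than taken from Duchi and Haque:

```python
import math
import random
from dataclasses import dataclass, field

@dataclass
class DoublingEstimator:
    """Fixed-time -> time-uniform reduction via doubling (illustrative sketch).

    Assumptions (mine): observations lie in [0, 1], the fixed-time estimator
    is the sample mean with Hoeffding width eps_n(delta) = sqrt(log(2/delta)/(2n)),
    and the error budget is split as delta_k = 6*delta / (pi^2 * (k+1)^2),
    so that sum_k delta_k <= delta.
    """
    delta: float = 0.05
    data: list = field(default_factory=list)
    estimate: float = float("nan")
    width: float = float("inf")

    def update(self, x: float) -> tuple[float, float]:
        self.data.append(x)
        t = len(self.data)
        if t & (t - 1) == 0:  # t is a power of two: recompute the estimate
            k = int(math.log2(t))
            delta_k = 6 * self.delta / (math.pi**2 * (k + 1) ** 2)
            self.estimate = sum(self.data) / t  # fixed-time estimator
            self.width = math.sqrt(math.log(2 / delta_k) / (2 * t))
        # Between powers of two, the previous estimate and width are reused;
        # only one random event per epoch enters the union bound.
        return self.estimate, self.width

# Usage: a time-uniform estimate of the mean of Uniform[0, 1] data.
est = DoublingEstimator(delta=0.05)
for _ in range(1000):
    mean, width = est.update(random.random())
print(f"estimate {mean:.3f} +/- {width:.3f}")
```

Note that the reported estimate and width are frozen between powers of two, which is exactly the practical wrinkle discussed next.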
While an elegant theoretical construction, it is somewhat dissatisfying for practical purposes, as we only update the estimator every $2^k$ timesteps. If data collection stops at some time $t \in (2^k, 2^{k+1})$, then we must discard $t - 2^k$ observations (indeed, possibly as many as roughly $t/2$ observations). This is unlike the examples above, which use time-uniform estimates within each epoch and thus allow the estimator to be updated after each observation is received.