Part of uncertainty quantification in which we have roughly the same goals as marginal consistency, but we want to achieve our guarantees in a sequential setting (see Sequential vs batch setting for the precise setup).
In the online setting, we are interested in mean and quantile guarantees that have low regret. Writing $p_t$ for our prediction at round $t$ and $y_t \in [0,1]$ for the realized label, we say an algorithm has marginal mean consistency if

$$\left|\sum_{t=1}^{T} (p_t - y_t)\right| = o(T),$$

and marginal quantile consistency (at a target level $q \in (0,1)$) if

$$\left|\sum_{t=1}^{T} \big(\mathbb{1}[y_t \le p_t] - q\big)\right| = o(T).$$
Marginal estimation in the online setting is not terribly interesting, especially in the mean case. If we just pick $p_t = \mathbb{1}\left[\sum_{s<t}(p_s - y_s) \le 0\right]$ and ignore the features $x_t$, then we achieve $O(1)$ regret (if $y_t \in [0,1]$): by induction, the running error $\sum_{s \le t}(p_s - y_s)$ never leaves $[-1, 1]$. The adversary can be arbitrarily powerful here.
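As a sanity check, here is a minimal Python sketch of this sign-based tracker (the function name and the random test sequence are my own illustration, not part of the setup):

```python
import random

def mean_tracker(ys):
    """ys: any sequence of labels in [0, 1], possibly adversarial."""
    error = 0.0                             # running sum of (p_s - y_s)
    for y in ys:
        p = 1.0 if error <= 0 else 0.0      # ignores the features x_t entirely
        error += p - y                      # stays in [-1, 1] by induction
        yield p

ys = [random.random() for _ in range(10_000)]
ps = list(mean_tracker(ys))
print(abs(sum(p - y for p, y in zip(ps, ys))))  # always <= 1, i.e. O(1) regret
```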
In the quantile case, we could simply predict $p_t = 1$ for a $q$ fraction of the rounds, and $p_t = 0$ for the remaining $1-q$ fraction; assuming $y_t \in (0,1)$, we cover on exactly the rounds where we predict $1$, so the empirical coverage is $q$ regardless of the labels. This easy algorithm should make us suspicious of marginal consistency.
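A sketch of one deterministic schedule achieving this, assuming labels lie strictly inside $(0,1)$ (the helper name is hypothetical):

```python
import math

def trivial_quantiles(T, q):
    """Yield 1.0 on a q fraction of the T rounds and 0.0 on the rest."""
    for t in range(T):
        # fires exactly when floor((t+1)q) ticks up, so floor(Tq) rounds get 1.0;
        # with y_t in (0, 1), we cover (y_t <= p_t) on exactly those rounds
        yield 1.0 if math.floor((t + 1) * q) > math.floor(t * q) else 0.0
```

The empirical coverage is $\lfloor Tq \rfloor / T \approx q$ no matter what the adversary plays.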
There’s also a somewhat more interesting algorithm in the quantile case (though still not terribly interesting, and it still ignores the features). If we do an update similar to online gradient descent (on the pinball loss), we can also achieve $O(1)$ regret. In particular, we pick

$$p_{t+1} = p_t + \eta\big(q - \mathbb{1}[y_t \le p_t]\big).$$
So if $y_t > p_t$ we increase $p_{t+1}$ slightly, and if $y_t \le p_t$ we lower it slightly. Telescoping the update gives the regret bound

$$\left|\sum_{t=1}^{T}\big(\mathbb{1}[y_t \le p_t] - q\big)\right| = \frac{|p_{T+1} - p_1|}{\eta} \le \frac{1 + 2\eta}{\eta},$$

since the iterates can never leave $[-\eta, 1+\eta]$.
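A minimal sketch of this gradient-style update (essentially the pinball-loss update from the adaptive conformal inference literature; the step size and starting point below are arbitrary choices of mine):

```python
import random

def quantile_tracker(ys, q, eta=0.05, p0=0.5):
    """Track the q-quantile of a sequence ys with labels in [0, 1]."""
    p = p0
    for y in ys:
        yield p                           # predict before seeing y_t
        covered = 1.0 if y <= p else 0.0
        p += eta * (q - covered)          # raise p after a miss, lower after a cover

ys = [random.random() for _ in range(10_000)]
q = 0.9
ps = list(quantile_tracker(ys, q))
err = sum((1.0 if y <= p else 0.0) - q for p, y in zip(ps, ys))
print(abs(err))  # = |p_{T+1} - p_1| / eta, so at most (1 + 2*eta)/eta
```

Note that the step size trades off against nothing here: any constant $\eta$ gives $O(1)$ cumulative coverage error, i.e. average error $O(1/T)$.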