Uniform convergence (UC) bounds are a type of worst-case, high-probability concentration inequality that one might prove for a class of functions $\mathcal{F}$. Let $\mathcal{F}$ consist of some set of functions $f : \mathcal{X} \to \mathbb{R}$ (one could replace $\mathbb{R}$ with any Banach space in theory, but that's rarely done). A UC bound has the form:

$$\Pr\left( \sup_{f \in \mathcal{F}} \left| P_n f - P f \right| \geq \epsilon \right) \leq \delta,$$

where $P_n f = \frac{1}{n} \sum_{i=1}^n f(X_i)$ is the empirical mean, $P f = \mathbb{E}[f(X)]$ is the population mean, and $X_1, \dots, X_n$ are random variables (typically i.i.d.).
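
As a concrete instance (my example, not part of the setup above): take $\mathcal{F} = \{ x \mapsto \mathbf{1}\{x \le t\} : t \in \mathbb{R} \}$, so that $P_n f - P f = F_n(t) - F(t)$, the deviation of the empirical CDF from the true CDF. The Dvoretzky–Kiefer–Wolfowitz inequality is then a UC bound with explicit constants:

$$\Pr\left( \sup_{t \in \mathbb{R}} |F_n(t) - F(t)| \geq \epsilon \right) \leq 2 e^{-2n\epsilon^2}.$$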

Uniform convergence bounds are applied in empirical process theory to show that $\mathcal{F}$ is a Glivenko–Cantelli class, for instance. They're also applied in learning theory, e.g. PAC learning, to give finite-sample bounds on estimators.
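
Here's a minimal numerical sketch of the Glivenko–Cantelli case (the indicator-function class from the DKW example above); the helper `sup_deviation` and the choice of a standard normal distribution are mine, for illustration only:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

def sup_deviation(n: int) -> float:
    """Kolmogorov-Smirnov statistic sup_t |F_n(t) - F(t)| for N(0,1) samples."""
    x = np.sort(rng.standard_normal(n))
    F = norm.cdf(x)
    # The ECDF only jumps at the order statistics, so the sup is attained
    # either just at or just before one of them.
    upper = np.arange(1, n + 1) / n - F   # F_n(x_i) - F(x_i)
    lower = F - np.arange(0, n) / n       # F(x_i) - F_n(x_i^-)
    return max(upper.max(), lower.max())

# The sup deviation shrinks as n grows, as Glivenko-Cantelli predicts.
for n in [100, 1_000, 10_000, 100_000]:
    print(f"n = {n:>6}: sup|F_n - F| = {sup_deviation(n):.4f}")
```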

Depending on the problem, we might find $\epsilon$ or $\delta$ first. E.g., the bound might have the form "for all $\epsilon > 0$, there exists some $\delta(n, \epsilon)$ that goes to $0$ as $n \to \infty$," which implies that we obtain convergence in probability of $\sup_{f \in \mathcal{F}} |P_n f - P f|$ to $0$. This would be the case when showing that $\mathcal{F}$ is a Glivenko–Cantelli class. But in PAC learning, we usually want to fix $\delta$ first, and then find $\epsilon(n, \delta)$ such that the bound above holds.
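
To make both directions concrete (a standard derivation I'm adding, assuming $\mathcal{F}$ is finite and each $f$ takes values in $[0,1]$): Hoeffding's inequality plus a union bound over $\mathcal{F}$ gives

$$\Pr\left( \sup_{f \in \mathcal{F}} |P_n f - P f| \geq \epsilon \right) \leq 2 |\mathcal{F}| e^{-2n\epsilon^2} =: \delta(n, \epsilon),$$

which goes to $0$ as $n \to \infty$ for any fixed $\epsilon$ (the Glivenko–Cantelli direction). Inverting instead, for a fixed $\delta$ we can set

$$\epsilon(n, \delta) = \sqrt{\frac{\log(2|\mathcal{F}|/\delta)}{2n}},$$

so that the bound holds with probability at least $1 - \delta$ (the PAC direction).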

As is common in empirical process theory, $\epsilon(n, \delta)$ will typically depend on some notion of complexity of $\mathcal{F}$, such as covering numbers, VC dimension, or Rademacher complexity.
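
For example (a standard result I'm adding for illustration, stated for functions taking values in $[0,1]$): the Rademacher complexity is

$$\mathfrak{R}_n(\mathcal{F}) = \mathbb{E}\left[ \sup_{f \in \mathcal{F}} \frac{1}{n} \sum_{i=1}^n \sigma_i f(X_i) \right],$$

where the $\sigma_i$ are i.i.d. uniform on $\{\pm 1\}$, and symmetrization plus the bounded differences inequality give that, with probability at least $1 - \delta$,

$$\sup_{f \in \mathcal{F}} \left( P f - P_n f \right) \leq 2 \mathfrak{R}_n(\mathcal{F}) + \sqrt{\frac{\log(1/\delta)}{2n}},$$

i.e. an $\epsilon(n, \delta)$ driven by the complexity term $\mathfrak{R}_n(\mathcal{F})$. (The two-sided version with $|P_n f - P f|$ follows by applying the same bound to $-\mathcal{F}$ and taking a union bound, at the cost of slightly worse constants.)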