A generalization of laws of large numbers to dependent processes, often arising from dynamical systems or stationary stochastic processes. Like LLNs, ergodic theorems say that the long-run behavior of averages stabilizes in some way.

People who are deep in the weeds on this stuff like to draw a distinction between sample averages and time averages, for some reason that remains mysterious to me. They’ll say things like “LLNs concern sample averages, whereas ergodic theorems describe time averages of a stochastic process.” Well fine, but you can just as easily reimagine a sample average as a time average (what is sequential statistics all about, after all?).
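To make the reframing concrete, here is a minimal sketch (my own illustration, with made-up names): the "sample average" of n i.i.d. draws is literally the "time average" of the process X_1, X_2, ... observed one step at a time, and the LLN says it stabilizes.

```python
import random

def running_mean(n, seed=0):
    """Time average of an i.i.d. Bernoulli(1/2) process observed sequentially."""
    rng = random.Random(seed)
    total = 0.0
    for t in range(1, n + 1):
        total += rng.random() < 0.5  # X_t = 1 with probability 1/2
    return total / n

# By the LLN, the running mean settles near the expectation 0.5.
print(running_mean(100_000))
```

Nothing about the computation cares whether you index the draws by "sample number" or by "time": it is the same average either way.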

I think a better conceptualization of the difference is just where the samples are coming from. If they come from a sufficiently complex dynamical or stochastic process (an i.i.d. random sample is still a stochastic process), then we call any theorem about the stabilization of averages an “ergodic theorem”.
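For a case where the samples come from a genuinely dependent, even deterministic, process, a standard textbook example is the irrational rotation x_{t+1} = (x_t + alpha) mod 1. For irrational alpha this map is ergodic with respect to Lebesgue measure, so Birkhoff's ergodic theorem says the time average of an observable f along an orbit converges to the space average, the integral of f over [0, 1). A hedged sketch (this example is mine, not from the note above):

```python
import math

def time_average(f, x0, alpha, n):
    """Average f along the orbit of x0 under rotation by alpha on the circle."""
    x, total = x0, 0.0
    for _ in range(n):
        total += f(x)
        x = (x + alpha) % 1.0  # deterministic, fully dependent "samples"
    return total / n

f = lambda x: math.cos(2 * math.pi * x)  # space average over [0, 1) is 0
alpha = math.sqrt(2) % 1.0               # an irrational rotation number

# Birkhoff: the time average converges to the space average, 0.
print(time_average(f, 0.1, alpha, 200_000))
```

The orbit here is nothing like an i.i.d. sample, yet the average of f along it stabilizes all the same, which is the whole point of calling these "ergodic theorems".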

I will add some examples when I understand them better.