We can use the paradigm of game-theoretic hypothesis testing to test forecasters: either by betting against a single forecaster, or by comparing forecasters against each other.

The setup is identical to hypothesis testing. A skeptic who is betting against the forecaster starts with wealth $\mathcal{K}_0 = 1$. At each round $t = 1, 2, \dots$:

  • Forecaster issues a probability distribution $P_t$ over some event space $\mathcal{Y}$ of possible outcomes.
  • Skeptic issues a payoff function $S_t \colon \mathcal{Y} \to [0, \infty)$ such that $\mathbb{E}_{P_t}[S_t \mid \mathcal{F}_{t-1}] = 1$, where $\mathcal{F}_{t-1}$ is the $\sigma$-algebra containing the information of what's happened so far.
  • Nature reveals an event $y_t \in \mathcal{Y}$.
  • Skeptic's wealth is updated as $\mathcal{K}_t = \mathcal{K}_{t-1} \cdot S_t(y_t)$.
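
To make the protocol concrete, here is a minimal simulation sketch in Python. The scenario is my own invention for illustration (the constant forecaster, the biased coin, and the names `run_protocol` and `lr_payoff` are all hypothetical): the skeptic bets the likelihood ratio of a fixed alternative, which satisfies $\mathbb{E}_{P_t}[S_t] = 1$ by construction.

```python
import numpy as np

rng = np.random.default_rng(0)


def run_protocol(forecasts, outcomes, make_payoff, wealth=1.0):
    """Play the protocol above against a binary forecaster.

    make_payoff(p) returns S_t as a dict {outcome: multiplier}; it must
    satisfy E_{P_t}[S_t] = 1, i.e. p * S_t(1) + (1 - p) * S_t(0) = 1.
    """
    path = [wealth]
    for p, y in zip(forecasts, outcomes):
        s = make_payoff(p)
        assert abs(p * s[1] + (1 - p) * s[0] - 1.0) < 1e-9  # fair under P_t
        wealth *= s[y]  # K_t = K_{t-1} * S_t(y_t)
        path.append(wealth)
    return path


# Hypothetical scenario: the forecaster always announces p_t = 0.5 while
# nature flips a 0.7-coin; the skeptic bets the likelihood ratio Q / P_t.
q = 0.7
forecasts = np.full(1000, 0.5)
outcomes = (rng.random(1000) < q).astype(int)


def lr_payoff(p):
    # Likelihood ratio of the (hypothetical) true coin vs the forecast
    return {1: q / p, 0: (1 - q) / (1 - p)}


path = run_protocol(forecasts, outcomes, lr_payoff)
print(f"final wealth: {path[-1]:.3g}")  # grows exponentially against a bad forecaster
```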

We are treating the forecaster's posited distribution as the null. The alternative is the composite hypothesis that nature is not following the forecaster's distribution (unless we want to test the forecaster against some particular hypothesis). Since $S_t \geq 0$ and $\mathbb{E}_{P_t}[S_t \mid \mathcal{F}_{t-1}] = 1$, the wealth $\mathcal{K}_t$ is a nonnegative martingale under the null, so by Ville's inequality it ever exceeds $1/\alpha$ with probability at most $\alpha$: large wealth is quantitative evidence against the forecaster.
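
If we want an explicit rejection rule, here is a minimal sketch (the function name is hypothetical): reject at level $\alpha$ the first time the wealth path reaches $1/\alpha$.

```python
def reject_time(wealth_path, alpha=0.05):
    """First round at which wealth reaches 1/alpha, or None.

    Under the null the wealth is a nonnegative martingale with K_0 = 1, so
    Ville's inequality gives P(sup_t K_t >= 1/alpha) <= alpha: rejecting at
    the crossing is an anytime-valid level-alpha test.
    """
    for t, k in enumerate(wealth_path):
        if k >= 1 / alpha:
            return t  # reject the forecaster's distribution at round t
    return None  # evidence never reached the 1/alpha threshold
```

For instance, `reject_time(path)` applied to the simulated path above typically rejects within a few dozen rounds, since the skeptic's wealth grows exponentially against the miscalibrated forecaster.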

Clearly, the question is how to choose $S_t$, which depends on $P_t$ and on the structure of $\mathcal{Y}$.

Binary outcomes

An interesting and relevant case (because of all the superforecasting mumbo jumbo) is when $\mathcal{Y}$ is binary or finite, e.g., $\mathcal{Y} = \{0, 1\}$. If we were testing against a particular alternative $Q_t$, this would reduce to a simple vs simple testing problem (testing by betting—simple vs simple), where the log-optimal payoff is the likelihood ratio $S_t = Q_t / P_t$. In general, we don't have a particular alternative in mind, so we need to use the plug-in method or the mixture method (see testing by betting—simple vs composite). I wrote a simple blog post about this here.
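
Here is a minimal sketch of the plug-in method for the binary case (my own toy version, not the construction from the linked post; the function name and numbers are hypothetical): estimate the alternative $\hat{q}_t$ from the outcomes seen so far, with Laplace smoothing so that $\hat{q}_t \in (0, 1)$, and bet the likelihood ratio against the forecaster's $p_t$.

```python
import numpy as np


def plugin_wealth(forecasts, outcomes, wealth=1.0):
    """Plug-in betting against a binary forecaster.

    At round t the alternative q_t is estimated from outcomes 1..t-1 with a
    Laplace-smoothed running mean, and the bet is the likelihood ratio
    S_t(y) = Q_t(y) / P_t(y). Because q_t uses only past data (it is
    predictable), E_{P_t}[S_t | F_{t-1}] = 1 still holds at every round.
    """
    ones, n = 0, 0
    for p, y in zip(forecasts, outcomes):
        q = (ones + 1) / (n + 2)  # Laplace-smoothed estimate of the alternative
        wealth *= q / p if y == 1 else (1 - q) / (1 - p)
        ones, n = ones + y, n + 1  # update the estimate only after betting
    return wealth


# Hypothetical example: forecaster says 0.5 every round, nature uses 0.6.
rng = np.random.default_rng(1)
outcomes = (rng.random(2000) < 0.6).astype(int)
print(plugin_wealth(np.full(2000, 0.5), outcomes))
```

Incidentally, with Laplace smoothing this plug-in coincides with the mixture method under a uniform prior on the alternative: Laplace's rule of succession is exactly the uniform-prior Bayes predictive, so the product of predictive likelihood ratios equals the mixture likelihood ratio.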