An approach to the foundations of statistics. The Neyman-Pearson (NP) paradigm is focused on decision-making above all else, as opposed to simply quantifying evidence against the null as Fisher wanted (see Fisher’s paradigm).
NP believed that a null hypothesis $H_0$ should always be tested against an alternative hypothesis $H_1$. They want to decide when and how to reject $H_0$ in favor of $H_1$, and therefore introduce the notions of type-I error, type-II error, and power. The NP paradigm cashes out correctness in terms of the frequentist principle (see frequentist statistics):
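In the usual notation (the symbols here are the standard ones, not taken from the source):

$$
\begin{aligned}
\alpha &= P(\text{reject } H_0 \mid H_0 \text{ true}) && \text{(type-I error rate)}\\
\beta &= P(\text{fail to reject } H_0 \mid H_1 \text{ true}) && \text{(type-II error rate)}\\
1 - \beta &= P(\text{reject } H_0 \mid H_1 \text{ true}) && \text{(power)}
\end{aligned}
$$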
> In repeated practical use of a statistical procedure, the long-run average actual error should not be greater than (and ideally should equal) the long-run average reported error.
>
> - From Berger.
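This principle can be checked by simulation. A minimal sketch, assuming a one-sided z-test at level $\alpha = 0.05$ on samples of $n = 30$ standard-normal observations (the z-test and all names here are illustrative choices; the testing procedure itself is described below):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
alpha = 0.05                      # pre-chosen, reported error rate
n, n_trials = 30, 200_000

# H0: mean = 0, known sigma = 1. Simulate many datasets where H0
# holds and count how often a level-alpha one-sided z-test rejects.
x = rng.normal(loc=0.0, scale=1.0, size=(n_trials, n))
z = np.sqrt(n) * x.mean(axis=1)   # z-statistic (sigma = 1)
p_values = norm.sf(z)             # one-sided p-values
actual_rate = np.mean(p_values <= alpha)

print(f"reported error: {alpha}, long-run actual error: {actual_rate:.4f}")
```

The long-run actual error rate printed at the end should come out close to the reported $\alpha = 0.05$, as the principle demands.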
The NP procedure looks like:
- Define a test statistic $T$ based on the data.
- Reject $H_0$ if $T > c$, where $c$ is some pre-chosen critical value. (Being pre-chosen is crucial; see issues with p-values.) A sketch of the whole procedure follows this list.
- (Perhaps more generally, compute a p-value $p$ and reject if $p \le \alpha$, where $\alpha$ is pre-chosen. Usually the p-value is computed by means of the test statistic above.)
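A minimal sketch of the procedure, assuming a one-sided z-test of $H_0: \mu = 0$ against $H_1: \mu > 0$ with known variance (the choice of z-test and the function name `np_test` are illustrative, not from the source):

```python
import numpy as np
from scipy.stats import norm

def np_test(data, alpha=0.05, sigma=1.0):
    """NP-style decision for H0: mu = 0 vs H1: mu > 0, sigma known."""
    c = norm.ppf(1 - alpha)                         # pre-chosen critical value
    t = np.sqrt(len(data)) * np.mean(data) / sigma  # test statistic T
    # Equivalent p-value route: reject iff norm.sf(t) <= alpha.
    return "reject H0" if t > c else "fail to reject H0"

# Example: data drawn with a true mean of 0.8, so the test should reject.
data = np.random.default_rng(1).normal(loc=0.8, scale=1.0, size=25)
print(np_test(data))
```

Note that $c$ is fixed from $\alpha$ before the data are seen; the decision rule itself, not the realized value of $T$, is what carries the error guarantees.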