Bayes factors can be considered a Bayesian analogue of the likelihood-ratio test. They are often used in hypothesis testing and model selection. Along with p-values and e-values, they are considered measures of evidence against the null.
For parameters $\theta_0$ and $\theta_1$ (perhaps representing different hypotheses or models) and given data $x$, the Bayes factor is
$$\operatorname{BF}_{10} = \frac{p(x \mid \theta_1)}{p(x \mid \theta_0)} = \frac{p(\theta_1 \mid x)\,/\,p(\theta_0 \mid x)}{p(\theta_1)\,/\,p(\theta_0)},$$
where the second equality follows from Bayes' theorem if we place a prior over the parameter space. Thus, it is really the second equality where Bayesianism enters the picture; the first equality is simply a likelihood ratio and need not be Bayesian at all.
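To make the two equalities concrete, here is a minimal Python sketch (a toy example of my own choosing, not drawn from any reference): two simple hypotheses $\theta_0 = 0$ and $\theta_1 = 1$ for the mean of a unit-variance Gaussian, with the Bayes factor computed once as a plain likelihood ratio and once, after placing a uniform prior on $\{\theta_0, \theta_1\}$, as posterior odds divided by prior odds. The two numbers agree up to floating-point rounding.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
x = rng.normal(loc=1.0, scale=1.0, size=20)   # toy data, actually drawn under theta_1

theta_0, theta_1 = 0.0, 1.0                   # the two candidate parameter values

# First equality: a plain likelihood ratio, no prior required.
loglik_0 = norm.logpdf(x, loc=theta_0, scale=1.0).sum()
loglik_1 = norm.logpdf(x, loc=theta_1, scale=1.0).sum()
bf_likelihood_ratio = np.exp(loglik_1 - loglik_0)

# Second equality: place a prior over {theta_0, theta_1}; by Bayes' theorem the
# same quantity equals (posterior odds) / (prior odds).
prior = np.array([0.5, 0.5])                                   # p(theta_0), p(theta_1)
log_joint = np.log(prior) + np.array([loglik_0, loglik_1])     # log p(theta) + log p(x | theta)
posterior = np.exp(log_joint - np.logaddexp(log_joint[0], log_joint[1]))  # normalise
bf_posterior_over_prior = (posterior[1] / posterior[0]) / (prior[1] / prior[0])

print(bf_likelihood_ratio, bf_posterior_over_prior)            # the two agree
```

The uniform prior here is only for illustration; any prior with nonzero mass on both values gives the same Bayes factor, since the prior odds cancel.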
Jeffreys (a Bayesian) gave a table summarizing how much evidence is provided by different values of $\operatorname{BF}_{10}$. So did Kass and Raftery in 1995. This is pretty silly, as it depends on the application and on what actions we are considering on the basis of $\operatorname{BF}_{10}$.