Biometrical Letters Vol. 58(1), 2021, pp. 1-26


THE PEARSON BAYES FACTOR: AN ANALYTIC FORMULA FOR
COMPUTING EVIDENTIAL VALUE FROM MINIMAL SUMMARY STATISTICS


Thomas J. Faulkenberry

Department of Psychological Sciences, Tarleton State University, Stephenville, Texas,
76402, USA, e-mail: faulkenberry@tarleton.edu


In Bayesian hypothesis testing, evidence for a statistical model is quantified by the Bayes factor, which represents the relative likelihood of the observed data under that model compared to a competing model. In general, computing Bayes factors is difficult, as computing the marginal likelihood of data under a given model requires integrating over a prior distribution of model parameters. In this paper, I capitalize on a particular choice of prior distribution that allows the Bayes factor to be expressed without integral representation, and I develop a simple formula, the Pearson Bayes factor, that requires only minimal summary statistics as commonly reported in scientific papers, such as the t or F statistic and the degrees of freedom. In addition to presenting this new result, I provide several examples of its use and report a simulation study validating its performance. Importantly, the Pearson Bayes factor gives applied researchers the ability to compute exact Bayes factors from minimal summary data, and thus to easily assess the evidential value of any data for which these summary statistics are provided, even when the original data are not available.
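To illustrate the kind of computation the paper targets, the sketch below shows a well-known approximate approach, the BIC-based Bayes factor (Wagenmakers, 2007; Faulkenberry, 2018), which likewise needs only an F statistic (or t, since F = t² when the numerator degrees of freedom equal 1), the degrees of freedom, and the sample size. This is not the exact Pearson Bayes factor derived later in the paper; the function name `bic_bf01` and the numerical example are illustrative assumptions only.

```python
import numpy as np

def bic_bf01(F, df1, df2, n):
    """Approximate Bayes factor BF01 (evidence for the null) from an F
    statistic via the BIC approximation (Wagenmakers, 2007; Faulkenberry,
    2018).  Shown only to illustrate computing evidential value from
    minimal summary statistics; it is not the exact Pearson Bayes factor.

    F   : observed F statistic (use t**2 with df1 = 1 for a t-test)
    df1 : numerator (effect) degrees of freedom
    df2 : denominator (error) degrees of freedom
    n   : total number of observations
    """
    # Difference in BIC between alternative and null models, expressed
    # through F via the sum-of-squares ratio SSE_1/SSE_0 = 1/(1 + F*df1/df2).
    delta_bic = df1 * np.log(n) - n * np.log(1.0 + F * df1 / df2)
    return np.exp(delta_bic / 2.0)  # BF01; BF10 = 1 / BF01

# Hypothetical example: F(1, 38) = 5.2 from n = 40 observations
bf01 = bic_bf01(F=5.2, df1=1, df2=38, n=40)
print(f"BF01 ~ {bf01:.3f}, BF10 ~ {1/bf01:.3f}")
```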


Keywords: Bayesian statistics, Bayes factor, Pearson Type VI distribution, summary statistics, t-test, analysis of variance