Publication Bias Calculator

Sources of Bias

This page provides an assessment of the likely false positive rates for published studies. Analyses, especially in observational contexts, can be overly managed at the institution conducting the research: multiple inferential analyses, population carve-outs, endpoints, hypotheses, and analysis methods may be tried, with results then selected for publication. Results may never see the light of day because they are negative or neutral, are not in the best interest of the organization, are inconsistent with other published results emanating from the institution, or run contrary to the opinions of influential external opinion leaders. Conclusions in such contexts should note that the results are exploratory or hypothesis-generating, that multiple analyses were conducted in addition to the reported results, and that nominal p-values are presented without adjustment for multiple testing. Such acknowledgements are necessary, unless all conducted analyses agreed substantially, because the presentation of such data often mimics the presentation of prospective controlled clinical trials. The calculator here looks at the consequence of just two of the reasons I mention, negative or neutral results, on the validity of the results that do get published.

Publication Bias Calculator

The calculator uses Bayes’ Theorem; mathematical details are in this attached document. In the first tab of the calculator, enter the probability that an analysis conducted at your institution will be published given a positive finding (a statistically significant result), and the probability of publication given a negative finding (a non-significant result). The default values are 80% and 10%. The third entry is the nominal false positive rate used in the analysis – usually a two-sided 5%. You can also enter the actual p-value obtained in your analysis. With the defaults, the actual false positive rate corresponding to the nominal 5% is close to 30%. Further adjustment for any of the multiple analyses mentioned above would make the results even less credible. Young and Karr [Significance. 2011; Vol 8, #3: 116-120] examined 52 claims from uncontrolled studies with significant results that were published in reputable journals such as NEJM, JAMA, and JNCI, and found that none of these significant findings held up in randomized clinical trials – 5 were supported in the opposite direction.
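The Bayes’ Theorem calculation described above can be sketched in a few lines. The function name and the exact form of the formula are my reconstruction (the attached document has the authoritative details), but under the null hypothesis the defaults of 80%, 10%, and a two-sided 5% reproduce the roughly 30% figure quoted above.

```python
def published_false_positive_rate(p_pub_pos, p_pub_neg, alpha):
    """Probability that a published result is a false positive when the
    null hypothesis is true, given selective publication.

    p_pub_pos : P(publish | statistically significant finding)
    p_pub_neg : P(publish | non-significant finding)
    alpha     : nominal two-sided false positive rate
    """
    # Under the null, a significant result occurs with probability alpha.
    sig_and_published = alpha * p_pub_pos
    nonsig_and_published = (1 - alpha) * p_pub_neg
    # Bayes' Theorem: fraction of published null results that are "significant"
    return sig_and_published / (sig_and_published + nonsig_and_published)


# Calculator defaults: 80% and 10% publication probabilities, nominal 5% alpha
print(round(published_false_positive_rate(0.80, 0.10, 0.05), 3))  # → 0.296
```

With the defaults, 0.05 × 0.80 = 0.04 of null analyses are significant and published, while 0.95 × 0.10 = 0.095 are non-significant and published, so about 0.04 / 0.135 ≈ 30% of published null results are false positives.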

Edit the blue cells in the spreadsheet to enter your data, and the calculations in the bottom box of the spreadsheet will refresh.