SPECIAL SESSION: Replicability in Plant Pathology: Do we have a problem? - Panel Discussion

**The role of P-values, power, and meta-analysis when thinking about replicability in science**

Laurence Madden - The Ohio State University.

Scientists are used to uncertainty. We know that we can make false-positive and false-negative decisions, trusting that future studies will correct past mistakes. However, in the effort to publish, win grants, and make an impact, scientists may discount or ignore the uncertainty of statistical-test results. It has been argued that the very common misinterpretation of P values (significance levels) is responsible for many current problems. If there is no prior information about whether a treatment effect is real, and a P value of 0.05 (the popular cut-off) is achieved in a study, the posterior probability of the null hypothesis (H_{0}) being true is approximately 0.29; the probability that the alternative hypothesis is true is then only about 0.71, hardly compelling evidence for rejecting H_{0}. Thus, there are many false positives in science. To reduce these errors, some authors have proposed lowering the P-value threshold for significance to 0.005, but this greatly reduces the power to detect the alternative hypothesis (usually the one of scientific interest) when it is true. Increasing power through improved experimental designs is highly desirable, when possible. Alternatively, there is great value in combining results from multiple studies in a meta-analysis to estimate the expected value and distribution of treatment effects; this approach can yield high power overall, even when the individual studies are imprecise.
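The 0.29 figure quoted above is consistent with the Sellke-Bayarri-Berger lower bound on the Bayes factor in favor of H_{0}, namely -e * p * ln(p), combined with equal prior probabilities for the two hypotheses. A minimal Python sketch of that calibration (the function name is ours, and the bound is one standard choice, not necessarily the exact calculation the speaker has in mind):

```python
import math

def posterior_null_prob(p, prior_null=0.5):
    """Lower bound on P(H0 | data) from the Sellke-Bayarri-Berger
    bound -e * p * ln(p) on the Bayes factor for H0 (valid for p < 1/e)."""
    if not 0 < p < 1 / math.e:
        raise ValueError("bound requires 0 < p < 1/e")
    bf01 = -math.e * p * math.log(p)              # Bayes factor bound, H0 vs H1
    prior_odds = prior_null / (1 - prior_null)
    posterior_odds = bf01 * prior_odds
    return posterior_odds / (1 + posterior_odds)

print(round(posterior_null_prob(0.05), 2))        # → 0.29, as in the abstract
print(round(posterior_null_prob(0.005), 2))       # → 0.07 at the stricter cut-off
```

Note how moving the threshold from 0.05 to 0.005 drops the (bounded) posterior probability of H_{0} from about 0.29 to about 0.07, which is the motivation for the stricter cut-off discussed above.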
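The power gain from pooling can be illustrated with a fixed-effect, inverse-variance meta-analysis on invented data: five hypothetical studies, none individually significant at P < 0.05, combine into a clearly significant pooled estimate. The numbers and helper function are ours, for illustration only:

```python
import math

def fixed_effect_meta(effects, ses):
    """Inverse-variance (fixed-effect) meta-analysis: returns the pooled
    effect estimate, its standard error, and a two-sided z-test P value."""
    weights = [1.0 / se**2 for se in ses]          # precision weights
    pooled = sum(w * y for w, y in zip(weights, effects)) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))
    z = pooled / se_pooled
    p = 1.0 - math.erf(abs(z) / math.sqrt(2.0))    # two-sided normal P value
    return pooled, se_pooled, p

# Hypothetical effect estimates and standard errors from five small studies;
# no single study reaches P < 0.05 on its own (all |z| < 1.96).
effects = [0.30, 0.25, 0.35, 0.20, 0.40]
ses = [0.20, 0.22, 0.25, 0.21, 0.24]
pooled, se, p = fixed_effect_meta(effects, ses)
print(round(pooled, 2), round(p, 3))               # pooled ≈ 0.29, P ≈ 0.003
```

Each study's standard error shrinks the pooled standard error when combined, so the pooled test attains a P value below even the proposed 0.005 threshold while every individual study would be declared a "failure to replicate" at 0.05.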