Science loses credibility if studies can’t be replicated

An increasing proportion of the scientific literature is irreproducible, with major consequences for progress

My task today is difficult. I must discuss a serious internal problem in science – it seems that a significant fraction of the results published in the scientific literature is unreliable. A comprehensive summary of the problem can be read in the Economist of October 19th.

Science proposes natural explanations (hypotheses) for natural processes, tests these hypotheses by experiment and publishes the results in sufficient detail to allow other scientists to replicate the work and to advance the study from there. But the disturbing thing now is that irreproducibility of published scientific work is widespread.

For example, scientists recently tried to replicate 53 landmark studies in cancer research but only succeeded in six cases (CG Begley and LM Ellis, Nature, Vol 483, pp 531-533, 2012). The Economist quotes an anonymous National Institutes of Health spokesman as predicting that researchers would now find it hard to reproduce at least three-quarters of all published biomedical findings.

Huge amounts of public money are spent on basic research. In 2012 the OECD countries spent $59 billion on biomedical research. One justification for this spending is that basic research forms the basis for private-sector drug development. But this justification cannot apply if the academic research is unreliable.


Of course, science is not infallible, and mistakes will occur from time to time. Science counters this observation by claiming that, over time, errors are corrected as other scientists take up the work. However, so many faulty papers are now being published that many are never corrected or withdrawn.

When analysing experimental data, scientists use statistics to estimate how likely it is that a result supporting the hypothesis could have arisen simply by chance – a false positive. If that likelihood is less than 5 per cent, they deem the evidence to be “statistically significant” – that is, to support the hypothesis. On that reasoning, at most one published paper in 20 should report a false positive result.
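A minimal simulation sketch (illustrative only, not from the article) shows where the one-in-20 figure comes from: when there is no real effect at all, a test applied at the conventional p < 0.05 threshold still declares roughly 5 per cent of results “significant” purely by chance.

```python
# Illustrative sketch: simulate experiments in which the hypothesis is actually
# false (both groups drawn from the same distribution) and count how often a
# p < 0.05 test nevertheless reports a "significant" result.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=1)
n_experiments = 10_000
false_positives = 0

for _ in range(n_experiments):
    group_a = rng.normal(loc=0.0, scale=1.0, size=30)  # no real effect:
    group_b = rng.normal(loc=0.0, scale=1.0, size=30)  # groups are identical
    _, p_value = stats.ttest_ind(group_a, group_b)
    if p_value < 0.05:  # the conventional significance threshold
        false_positives += 1

print(f"False positive rate: {false_positives / n_experiments:.3f}")  # about 0.05
```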


Wildly optimistic
John Ioannidis, an epidemiologist from Stanford University, claims that the estimate that only one in 20 papers reports a false positive result is wildly optimistic, because the usual approach to statistical significance ignores several things, principally the statistical power of the study (a measure of its ability to detect a real effect). He argues on the basis of statistical logic that "most published research findings are probably false".
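A short sketch of that logic, using assumed and purely illustrative numbers for statistical power and for the share of tested hypotheses that are actually true, shows how the proportion of false findings among “significant” results can far exceed 5 per cent:

```python
# Illustrative arithmetic in the spirit of Ioannidis's argument (the power and
# prior values below are assumptions for illustration, not his exact figures).
alpha = 0.05       # significance threshold: false positive rate when the hypothesis is false
power = 0.20       # assumed power: chance of detecting an effect that is real
prior_true = 0.10  # assumed share of tested hypotheses that are actually true

true_positives = prior_true * power         # real effects that reach significance
false_positives = (1 - prior_true) * alpha  # chance findings that reach significance

share_false = false_positives / (true_positives + false_positives)
print(f"Share of significant findings that are false: {share_false:.0%}")  # about 69%
```

With low power and few genuinely correct hypotheses, false positives outnumber true ones among the “significant” results, even though each individual test keeps its 5 per cent error rate.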

The situation is exacerbated by the professional culture of scientific research. Science is expensive, money is scarce and only a fraction of applications to funding bodies are successful. A scientist has little or no hope of winning research grants, thereby building career prospects and professional reputation, unless he/she has a good publication record. The “publish or perish” maxim encourages many scientists to publish material before it is ready. It encourages others to massage or even fabricate data.

But what about the peer-review process, whereby all papers are reviewed by expert scientists for quality before publication? Unfortunately, reviewers are not good at spotting mistakes. Biologist John Bohannon recently submitted a pseudonymous paper to 304 peer-reviewed journals, in the name of a fictitious researcher from a fictitious university. The paper was a fabrication, full of design, analysis and interpretative faults, but nevertheless it was accepted by 157 journals (Science, Vol 342, No 6154, pp 60-65, 4th Oct 2013).

Of course, there is no need to panic. Science continues to do great work – diseases are cured and probes land on Mars. However, the situation outlined in this article must be corrected quickly. Science enjoys great public trust, but this could easily be lost. At present there is little cost to getting things wrong but a great cost to not getting published. That equation needs to be rebalanced: it should not be possible to walk away easily from shoddy publications.

One suggestion is that science should emulate the auditing mechanism used to encourage income-tax compliance, checking published work for reproducibility. This would cost money, and funding agencies are reluctant to spend money that does not directly produce new knowledge. They don’t want to slow science down. Presumably if they had charge of road safety they would abolish speed limits to keep the traffic moving smartly along.


William Reville is an emeritus professor of biochemistry and public awareness of science officer at UCC.
understandingscience.ucc.ie