Can We Trust the Science? The Challenge of Irreproducible Results

Legal Planet: Environmental Law and Policy 2015-08-31

In the peer review process, articles submitted to scientific journals are sent to experts in the field, who assess the methodology, results, and conclusions. Based on their feedback, authors often revise and resubmit, publishing an improved article as a result. Peer reviewers rarely attempt to repeat the actual experiments described in a paper, however, so irreproducible results are always a potential problem. Indeed, there is even a satirical journal dedicated to the issue (including a Graph That Proves All Theories).

The general assumption has been that irreproducible results, whether the product of fraud or of error, will eventually be exposed by others working in the field. High-profile cases of fraud are often reported in the press, such as in cancer research or stem cell research. And despite climate deniers’ claims that shoddy research, or worse, lies behind the science of climate change, it has been widely assumed that irreproducible results are low-frequency occurrences. A recent study, reported in the Washington Post, suggests, however, that the problem may be much bigger than recognized.

Psychologists sought to reproduce the results of 100 published experimental studies. They succeeded only 39 times. Put another way, 61 percent of the studies in their sample could not be independently verified. The senior editor of Science reacted with the defense that “This somewhat disappointing outcome does not speak directly to the validity or the falsity of the theories. What it does say is that we should be less confident about many of the experimental results that were provided as empirical evidence in support of those theories.” Somewhat disappointing outcome? Sounds like putting lipstick on a pig.
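The underlying arithmetic is worth making explicit. What follows is a back-of-envelope sketch of my own, not part of the study, which in fact used several more nuanced criteria for what counts as a successful replication:

    # Back-of-envelope check on the replication figures cited above.
    # Treats each attempt as a simple pass/fail, which simplifies the
    # study's actual criteria for a "successful" replication.
    attempted = 100
    reproduced = 39

    failed = attempted - reproduced     # 61 studies
    failure_rate = failed / attempted   # 0.61, roughly three in five

    print(f"{failed} of {attempted} studies ({failure_rate:.0%}) did not replicate")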

Scientific data are where the rubber meets the road in environmental law. Should pollutant standards be strengthened? Should a particular species be listed under the ESA? Should we regulate Dimethyl Terrible? All of these questions turn on the underlying research data. If the data themselves are suspect, the policy consequences will be, as well.

A single study in psychology research surely does not undermine the state of modern science, and one hundred experiments may be too small a sample size. But this article reinforces fundamental concerns that well-respected researchers have voiced in recent years. One hopes that researchers will assess the reproducibility of results in other fields as well. Whether the major peer-reviewed journals will respond adequately remains to be seen (the scientific journal Nature has dedicated a website to the problem). One could imagine EPA’s Science Advisory Boards having to address this concern in the near future, not to mention Congressional hearings about “sound science.” However it develops, this is an issue that will not go away soon.
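To give the sample-size caveat rough quantitative shape, here is a minimal sketch, again my own illustration rather than anything in the study, of the statistical uncertainty attached to a 39-out-of-100 result, using a standard normal approximation for a binomial proportion:

    import math

    # Rough uncertainty on the observed replication rate, treating each of
    # the 100 attempts as an independent pass/fail trial (a simplifying
    # assumption; the replicated studies differ in power, field, and design).
    n = 100          # replication attempts
    successes = 39   # successful replications

    p_hat = successes / n                           # observed rate: 0.39
    se = math.sqrt(p_hat * (1 - p_hat) / n)         # standard error, ~0.049
    lo, hi = p_hat - 1.96 * se, p_hat + 1.96 * se   # approximate 95% interval

    print(f"replication rate: {p_hat:.0%}, 95% CI roughly {lo:.0%} to {hi:.0%}")
    # -> about 29% to 49%: a wide interval, but below one-half even at the top.

Even on the most generous reading of that interval, fewer than half the studies replicate. That is why a sample of 100 is enough to raise the concern, even if it cannot settle it.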