The new scientific revolution: Reproducibility at last - The Washington Post 2015-01-29


Diederik Stapel, a professor of social psychology in the Netherlands, had been a rock-star scientist — regularly appearing on television and publishing in top journals. Among his striking discoveries was that people exposed to litter and abandoned objects are more likely to be bigoted. And yet there was often something odd about Stapel’s research. When students asked to see the data behind his work, he couldn’t produce it readily. And colleagues would sometimes look at his data and think: It’s beautiful. Too beautiful. Most scientists have messy data, contradictory data, incomplete data, ambiguous data. This data was too good to be true.

In late 2011, Stapel admitted that he’d been fabricating data for many years.

The Stapel case was an outlier, an extreme example of scientific fraud. But this and several other high-profile cases of misconduct resonated in the scientific community because of a much broader, more pernicious problem: Too often, experimental results can’t be reproduced.

That doesn’t mean the results are fraudulent or even wrong. But in science, a result is supposed to be verifiable by a subsequent experiment. An irreproducible result is inherently squishy.

And so there’s a movement afoot, and building momentum rapidly. Roughly four centuries after the invention of the scientific method, the leaders of the scientific community are recalibrating their requirements, pushing for the sharing of data and greater experimental transparency.

Top-tier journals, such as Science and Nature, have announced new guidelines for the research they publish ... The pharmaceutical companies are part of this movement. Big Pharma has massive amounts of money at stake and wants to see more rigorous pre-clinical results from outside laboratories. The academic laboratories act as lead-generators for companies that make drugs and put them into clinical trials. Too often these leads turn out to be dead ends ...
But Ivan Oransky, founder of the blog Retraction Watch, says data-sharing isn’t enough. The incentive structure in science remains a problem, because there is too much emphasis on getting published in top journals, he said. Science is competitive, funding is hard to get and tenure harder, and so even an honest researcher may wind up stretching the data to fit a publishable conclusion ...


From feeds:

Open Access Tracking Project (OATP) »

Tags: oa.comment oa.prestige oa.impact oa.clinical_trials oa.retraction_watch oa.reproducibility oa.open_science

Date tagged:

01/29/2015, 09:08

Date published:

01/29/2015, 04:08