Blog | @ccess | Sharing the results of scientific research 2012-03-14


"Who has never been in the situation that he had a set of data where some of them just didn’t seem to fit. A simple adjusting of the numbers or omitting of strange ones could solve the problem. Or so you would think. I certainly have been in such a situation more than once, and looking back, I am glad that I left the data unchanged. At least in one occasion my “petty” preformed theory proved to be wrong and the ‘strange data’ I had found were corresponding very well with another concept that I hadn’t thought of at the time. There has been a lot of attention in the media recently for cases of scientific fraud. Pharmaceutical companies are under fire for scientific misconduct (Tamiflu story), and in the Netherlands the proven cases of fraud by Stapel (social psychology), Smeesters (psychology) and Poldermans (medicine/cardiology) have resulted in official investigations into the details of malpractice by these scientists... A report with recommendations for preventing scientific fraud, called 'sharpening policy after Stapel'  was published by four Dutch social psychologists:  Paul van Lange (Amsterdam), Bram Buunk (Groningen), Naomi Ellemers (Leiden) and Daniel Wigboldus (Nijmegen). One of the report’s main recommendations is to share raw data and have them permanently stored safely and accessible for everyone... In this article I propose that for almost all of the instances where scientific misconduct was found, open access to articles AND raw data would have either prevented the fraud altogether, or at the very least would have caused them to be exposed much more rapidly than has been the case in the current situation. Especially in the field of medical research such a change can literally change lives.  To illustrate this point I want to make a distinction between different forms of ‘Bad Science’. 
On the author side we have selective publishing (omitting data that do not fit one's theory), non-reproducibility, data manipulation and, at the far end of the spectrum, outright data fabrication. On the publisher side we have publication bias (the preferential publishing of positive results or of data that confirm an existing theory), fake peer review, and reviewers or editors pushing authors to make a 'clear story' by omitting data (which effectively amounts to selective publishing!)..."



08/16/2012, 08:34

From feeds:

Open Access Tracking Project (OATP) » Connotea: tomolijhoek's bookmarks matching tag

Tags: openaccess oa.comment oa.copyright oa.crowd oa.impact oa.prestige oa.okfn oa.@access oa.business_models oa.publishers oa.quality oa.reports oa.reproducibility oa.recommendations oa.credibility oa.preservation oa.open_science oa.usa.ny oa.journals



Date tagged:

03/14/2012, 19:18

Date published:

03/16/2012, 08:34