Does science need 'open evaluation' in addition to 'open access'?

abernard102@gmail.com 2012-11-19

Summary:

In an editorial accompanying an ebook titled 'Beyond open access: visions for open evaluation of scientific papers by post-publication peer review,' Nikolaus Kriegeskorte argues that scientists, not publishers, are in the best position to develop a fair evaluation process for scientific papers. The ebook, published today in Frontiers, compiles 18 peer-reviewed articles that lay out detailed visions of how a transparent, open evaluation (OE) system could work for the benefit of all science. This transparency is paramount because the evaluation process is the central steering mechanism of science and influences public policy as well. The authors come from a wide variety of disciplines, including neuroscience, psychology, computer science, artificial intelligence, medicine, molecular biology, chemistry, and economics.

'Peer reviews should be made public information, like the scientific papers themselves. In a lot of ways, the network of scientific publications is similar to a neural network. Each paper or peer review could be seen as a neuron with excitatory and inhibitory connections, and this information is vital in judging the value of its results,' says Kriegeskorte, a researcher at the University of Cambridge. Yet unlike the richly interactive and ongoing activity in a neural network, the current peer review process is typically limited to 2-4 reviewers and remains fossilized in the pre-publication phase. According to Kriegeskorte, secretive and time-limited pre-publication peer review is no longer the optimal system. He writes, 'Open evaluation, an ongoing post-publication process of transparent peer review and rating of papers, promises to address the problems of the current system. However, it is unclear how exactly such a system should be designed.'

To explore possible design solutions for OE, Kriegeskorte and his student Diana Deca launched a Research Topic at Frontiers, a format in which a researcher chooses a topic and invites his or her peers to contribute an article. While Kriegeskorte was expecting a divergent set of solutions, he says the visions turned out to be largely convergent: the evaluation of papers should be completely transparent, post-publication, perpetually ongoing, and backed by modern statistical methods for inferring the quality of papers; and the system should provide a plurality of perspectives on the literature. According to Kriegeskorte, transparency is the antidote to corruption and bias. 'Science will continue to rely on peer review, because it needs explicit expert judgments, rather than media buzz, to evaluate papers.'

He suggests a two-step process based on a fundamental division of powers. In the first step, after a manuscript is published online, anyone can publicly post a review or rate the paper. In the second step, independent web portals to the literature combine all the evaluations to give a prioritized perspective on the literature. The scoring system could be as simple as an average of all the ratings, but different web portals could weight different rating scales and individual reviewers differently. In the end, he believes, 'the important thing is that scientists themselves take on the challenge of building the central steering mechanism for science: its evaluation system.'
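
The article leaves the aggregation step at the level of a sketch: a portal might take a simple average of all public ratings, or it might weight scales and reviewers differently. As a minimal illustration of that idea only (not from the source; the reviewer names, weights, and 0-10 scale are assumptions), the following Python snippet computes a weighted mean of a paper's public ratings, with each hypothetical portal supplying its own reviewer weights.

```python
# Illustrative sketch: one way a hypothetical web portal could combine
# public, signed ratings into a single prioritized score for a paper.
# Reviewer names, weights, and the 0-10 rating scale are assumptions.

def portal_score(ratings, reviewer_weights, default_weight=1.0):
    """Return the weighted mean of a paper's ratings.

    ratings          -- dict: reviewer name -> rating (assumed 0-10 scale)
    reviewer_weights -- dict: reviewer name -> weight chosen by this portal
    default_weight   -- weight for reviewers the portal has no opinion about
    """
    total = 0.0
    weight_sum = 0.0
    for reviewer, rating in ratings.items():
        w = reviewer_weights.get(reviewer, default_weight)
        total += w * rating
        weight_sum += w
    return total / weight_sum if weight_sum else None


if __name__ == "__main__":
    public_ratings = {"alice": 8, "bob": 6, "carol": 9}   # hypothetical data
    portal_a = {}                                          # plain average
    portal_b = {"alice": 2.0, "carol": 0.5}                # reviewer-specific weights
    print(portal_score(public_ratings, portal_a))          # 7.67 (simple mean)
    print(portal_score(public_ratings, portal_b))          # 7.57 (weighted mean)
```

Two different portals applied to the same public ratings yield different prioritizations, which is the plurality of perspectives the contributors call for; the evaluations themselves remain a shared, transparent record.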

Link:

http://phys.org/news/2012-11-science-addition-access.html

From feeds:

Open Access Tracking Project (OATP) » abernard102@gmail.com

Tags:

oa.new oa.policies oa.peer_review oa.quality oa.frontiers oa.editorials

Date tagged:

11/19/2012, 13:55

Date published:

11/19/2012, 08:55