The Tangled Web of Scientific Publishing — The James G. Martin Center for Academic Renewal

peter.suber's bookmarks 2018-10-11

Summary:

"How to vet submitted manuscripts, how to reform the inefficient and in many ways corrupt journal system, how best to limit costs: all are tricky questions. I offer no simple answers, but here are some suggestions as starting points for debate:

  • Consider abolishing the standard paper-journal structure. (The system I describe below also allows for aggregators to arise, selecting papers based on quality or interest area as a substitute for area-specific journals.)
  • Suppose that all submissions, suitably tagged with interest-area labels by the author, were instead to be sent to a central SUBMISSIONS repository. (Sci-Hub, a pirate site containing “scraped” published papers, already exists, and there are open access repositories where anyone can park files.)
  • Suppose there were a second central repository for prospective reviewers of the papers submitted to the first repository. Anyone who is interested in reviewing manuscripts (usually but not necessarily a working scientist) would be invited to submit his or her areas of interest and qualifications to this REVIEWER repository.
  • Reviewing would then consist of somehow matching up manuscripts with suitable reviewers. Exactly how this should be done needs to be debated; many details need to be worked out. How many reviewers? What areas? How much weight should be given to matching reviewers’ expertise, in general, and in relation to the manuscript to be reviewed? What about conflict of interest, etc.? But if rules could be agreed on, the process could probably be automated.
  • Reviewers would be asked to both comment on the submission and give it two scores: (a) validity: are the results true/replicable? (b) importance: a more subjective judgment.
  • If a reviewer detects a remediable flaw, the manuscript author should have the opportunity to revise and resubmit and hope to get a higher score.
  • Manuscripts should always be publicly available unless withdrawn by the author. But after review, they will be tagged with the reviewers’ evaluation(s). No manuscript need be “rejected.”
  • Employers/reviewers looking at material to evaluate for promotion, salary review, etc. would then have to decide which category of reviewed manuscript to count as a “publication.” Some might see this as a problem—for employers if not for science. But publishing unreviewed material is well accepted in some areas of scholarship. The NBER, for example, has a section called “Working Papers” which “are circulated prior to publication for comment and discussion.”
  • Interested readers can search the database of manuscripts by publication date, reviewers’ scores, topics, etc. in a more flexible and unbiased way than current reliance on a handful of “gatekeeper” journals.

This is not a finished proposal. Each of these suggestions raises questions. But one thing is certain: the present system is slow, expensive, and inadequate. Science needs something better...."
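The matching step the author leaves open (pairing manuscripts in the SUBMISSIONS repository with volunteers in the REVIEWER repository by interest-area tags) could be automated roughly as follows. This is a minimal illustrative sketch, not part of the proposal: the function name, the tag data, and the overlap-counting rule are all assumptions.

```python
# Hypothetical sketch of tag-based reviewer matching. Reviewer names,
# tags, and the ranking rule are illustrative assumptions only.

def match_reviewers(manuscript_tags, reviewers, n_reviewers=2):
    """Rank reviewers by how many interest-area tags they share with
    the manuscript, and return the top n_reviewers with any overlap."""
    ranked = sorted(
        reviewers.items(),
        key=lambda item: len(set(manuscript_tags) & set(item[1])),
        reverse=True,
    )
    # Keep only reviewers who share at least one interest area.
    return [name for name, tags in ranked
            if set(manuscript_tags) & set(tags)][:n_reviewers]

reviewers = {
    "alice": ["genomics", "statistics"],
    "bob": ["ecology"],
    "carol": ["statistics", "ecology"],
}
picked = match_reviewers(["statistics", "ecology"], reviewers)
```

A real system would also need to weigh the open questions the author lists (number of reviewers, expertise weighting, conflicts of interest) before anything like this simple overlap count could be trusted.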

Link:

https://www.jamesgmartin.center/2018/10/the-tangled-web-of-scientific-publishing/

From feeds:

Open Access Tracking Project (OATP) » peter.suber's bookmarks

Tags:

oa.new oa.publishing oa.recommendations oa.peer_review oa.monopoly oa.business_models

Date tagged:

10/11/2018, 09:43

Date published:

10/11/2018, 05:43