Language Log » The open access hoax and other failures of peer review


Curt Rice in the Guardian, "Open access publishing hoax: what Science magazine got wrong", 10/4/2013:

Science magazine has published a blistering critique of the most sacred cow of scientific research, namely the peer review quality system. Unfortunately, Science doesn't seem to have understood its own findings. It proclaims to have run a sting operation, written by 'gonzo scientist' John Bohannon, revealing the weaknesses of the relatively new publishing model we call open access. In fact, the Science article shows exactly the opposite of what it intended, namely that we need an even wider use of open access than the one we currently have.

The version published on Curt's web log ("What Science — and the Gonzo Scientist — got wrong: open access will make research better") closes with a list of links to other commentary on the Science article:

- "Science magazine rejects data, publishes anecdote", by Björn Brembs
- "John Bohannon's peer review sting against Science", by Mike Taylor
- "New 'sting' of weak open access journals", by Peter Suber
- "I confess, I wrote the arsenic DNA paper to expose flaws in peer-review at subscription based journals", by Michael Eisen
- "Science reporter spoofs hundreds of open access journals with fake papers", by Ivan Oransky, at the wonderful Retraction Watch
- "OASPA's response to the recent article in Science, entitled 'Who's Afraid of Peer Review?'" (press release)
- "What Science's 'sting operation' reveals: open access fiasco or peer review hellhole?", by Kausik Datta
- "Who's afraid of open access?", by Ernesto Priego
- "Science Mag sting of OA journals: is it about open access or about peer review?", by Jeroen Bosman

There are very important issues at stake here, and Curt has very worthwhile things to add to the discussion, so you should definitely read both the Science article and Curt's response.

What's this all about?

To start with, Science is definitely not an open-access journal — to read most articles, you need to be a member of the AAAS (at relatively modest prices ranging from $75/year for students to $151/year for "Professional Member" status), or have access to a library that subscribes.  Like other society-centered journals, Science is suffering from attrition in its membership rolls due to the simple fact that most potential members can get access through their university or company library, and would just as soon not pile up paper copies that clog their recycling bins. And like other non-open-access journals, Science is suffering from the moral and political assault of the open access movement, which variously argues that publicly-funded research reports should be accessible to the public, and that authors (who are not paid for their contributions) benefit from broader access to their writings, and (sometimes) that access to digital information should be priced at its marginal cost of reproduction, which in the case of scholarly and scientific publication is essentially zero.

Journals like Science have done several things in response. They've tried to keep subscription prices down — especially in comparison to the sometimes-exorbitant prices charged by commercial publishers like Reed Elsevier; they've tried to offer additional value to members; they've allowed various forms of limited or delayed open access; and they've made anti-open-access counterarguments, of which the Bohannon article is an extreme example.

There are some stronger anti-open-access arguments. For example, there are non-zero costs associated with editing and managing a journal, which are on the order of $1,000 per published paper. The commonest "open access" method of raising this money is the "Author Pays" model, in which the journal charges would-be authors a fee, usually if the paper is accepted for publication, but sometimes at the time of submission. (The range of such fees has been about $400-$3500 in cases that I've encountered.)

There are two potential problems with the "Author Pays" model.

First, for authors who don't have grants or slush funds to pay such fees, the cost can be a problem. There are fields where productive researchers expect to publish five to ten articles per year, so cumulative publication fees might easily exceed $10,000 a year. This is essentially a problem in politico-economic restructuring — the billions of dollars now spent by libraries on journal subscriptions are the obvious place to look for the needed funds. But of course, this kind of restructuring is extremely hard to arrange.
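As a quick back-of-the-envelope check on that figure, here's the arithmetic, using assumed fee levels drawn from the $400-$3500 range above rather than from any particular journal:

```python
# Rough annual author-fee burden under the "Author Pays" model.
# Fee levels are assumptions within the $400-$3500 range mentioned above.
papers_per_year = [5, 10]           # output of a productive researcher in some fields
fee_per_paper = [400, 1000, 3500]   # low, mid, and high ends of observed fees

for n in papers_per_year:
    for fee in fee_per_paper:
        print(f"{n} papers/year at ${fee}/paper -> ${n * fee:,}/year")
# Even at mid-range fees, ten papers a year comes to $10,000 --
# consistent with the estimate above.
```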

Second (and in my opinion more important), the "Author Pays" model is an invitation to chicanery and fraud. Starting more than a decade ago, we saw the proliferation of "spamferences" — ad hoc international conferences, organized by ad hoc international organizing committees, whose goal seems to be to persuade gullible researchers to pay substantial registration fees to present their papers, usually in resort locations (or places that sound like resorts, at least). The structure and the cash flow of these spamferences are not at all distinct from those of the annual meetings of reputable organizations — but there is nevertheless a difference. See "(Mis)Informing Science", 4/20/2005, and "Dear [Epithet] spamference organizer [Name]", 10/6/2010, for some discussion.

As a result of the blossoming of the Open Access movement, there has been a similar proliferation of journals, on a continuum from those motivated by the best interests of humanity to out-and-out frauds. It appears that a number of large and well-funded operations have started to mine this vein of ore, often with publications that are well out towards the "take the money and run" end of the spectrum.

John Bohannon demonstrated this by building an engine to create a large number of nonsense versions of a pretend scientific paper:

The goal was to create a credible but mundane scientific paper, one with such grave errors that a competent peer reviewer should easily identify it as flawed and unpublishable. Submitting identical papers to hundreds of journals would be asking for trouble. But the papers had to be similar enough that the outcomes between journals could be comparable. So I created a scientific version of Mad Libs.

The paper took this form: Molecule X from lichen species Y inhibits the growth of cancer cell Z. To substitute for those variables, I created a database of molecules, lichens, and cancer cell lines and wrote a computer program to generate hundreds of unique papers. Other than those differences, the scientific content of each paper is identical.

The fictitious authors are affiliated with fictitious African institutions. I generated the authors, such as Ocorrafoo M. L. Cobange, by randomly permuting African first and last names harvested from online databases, and then randomly adding middle initials. For the affiliations, such as the Wassee Institute of Medicine, I randomly combined Swahili words and African names with generic institutional words and African capital cities. My hope was that using developing world authors and institutions would arouse less suspicion if a curious editor were to find nothing about them on the Internet.
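For concreteness, here's a minimal sketch of the kind of template-substitution generator Bohannon describes. Everything in it (the molecule, lichen, cell-line, and name lists) is an invented placeholder, not his actual code or data:

```python
import itertools
import random

# Invented placeholder data; Bohannon's real databases were much larger.
molecules = ["usnic acid", "parietin", "atranorin"]
lichens = ["Usnea dasypoga", "Xanthoria parietina", "Cladonia rangiferina"]
cell_lines = ["HeLa", "MCF-7", "A549"]
first_names = ["Ocorrafoo", "Wanjiru", "Kofi"]
last_names = ["Cobange", "Mwangi", "Diallo"]

TEMPLATE = ("{molecule} from the lichen {lichen} inhibits "
            "the growth of {cells} cancer cells")

def make_author():
    """Random first/last name pairing with random middle initials."""
    a, b = random.sample("ABCDEFGHIJKLMNOPQRSTUVWXYZ", 2)
    return f"{random.choice(first_names)} {a}. {b}. {random.choice(last_names)}"

def generate_papers():
    """One 'unique' paper per (molecule, lichen, cell line) combination."""
    for molecule, lichen, cells in itertools.product(molecules, lichens, cell_lines):
        yield {
            "title": TEMPLATE.format(molecule=molecule, lichen=lichen, cells=cells),
            "author": make_author(),
        }

for paper in generate_papers():
    print(f'{paper["author"]}: "{paper["title"]}"')
```

With three entries per list this yields 27 distinct papers; with databases of realistic size, the same Mad Libs move yields the hundreds of variants Bohannon needed.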

He then submitted these papers to a large number of (vaguely biomedical) journals:

Between January and August of 2013, I submitted papers at a rate of about 10 per week: one paper to a single journal for each publisher. I chose journals that most closely matched the paper's subject. First choice would be a journal of pharmaceutical science or cancer biology, followed by general medicine, biology, or chemistry. In the beginning, I used several Yahoo e-mail addresses for the submission process, before eventually creating my own e-mail service domain, afra-mail.com, to automate submission.
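The selection rule he describes (one journal per publisher, chosen by subject in a fixed order of preference) is simple enough to sketch. The subject labels and the toy catalog below are hypothetical illustrations, not his actual tooling:

```python
# Bohannon's stated order of preference when matching a paper to a journal.
SUBJECT_PREFERENCE = ["pharmaceutical science", "cancer biology",
                      "general medicine", "biology", "chemistry"]

def choose_journal(journals):
    """journals: list of (name, subject) pairs for one publisher.
    Returns the journal whose subject ranks highest in the preference list."""
    def rank(entry):
        _, subject = entry
        if subject in SUBJECT_PREFERENCE:
            return SUBJECT_PREFERENCE.index(subject)
        return len(SUBJECT_PREFERENCE)  # unlisted subjects rank last
    return min(journals, key=rank)

# A hypothetical publisher's catalog:
catalog = [("Journal of Hypothetical Chemistry", "chemistry"),
           ("Annals of Hypothetical Oncology", "cancer biology")]
print(choose_journal(catalog))  # picks the cancer biology title
```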

The results?

By the time Science went to press, 157 of the journals had accepted the paper and 98 had rejected it. Of the remaining 49 journals, 29 seem to be derelict: websites abandoned by their creators. Editors from the other 20 had e-mailed the fictitious corresponding authors stating that the paper was still under review; those, too, are excluded from this analysis. Acceptance took 40 days on average, compared to 24 days to elicit a rejection.
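A quick tally, using only the figures quoted above, makes the proportions clear:

```python
# Tally of Bohannon's reported outcomes, from the figures quoted above.
accepted, rejected = 157, 98
derelict, under_review = 29, 20

decided = accepted + rejected          # 255 journals reached a decision
excluded = derelict + under_review     # 49 journals excluded from the analysis
total = decided + excluded             # 304 journals targeted in all

print(f"decided: {decided}, excluded: {excluded}, total: {total}")
print(f"acceptance rate among decided journals: {accepted / decided:.0%}")  # ~62%
```

In other words, more than three-fifths of the journals that reached a decision accepted a paper whose flaws any competent reviewer should have caught.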

It has to be noted that traditional paid-access journals are always motivated to some degree by the desire for money: a modest (but intense) desire on the part of the staff of scientific and technical societies, and a more expansive and rapacious desire on the part of companies like Reed Elsevier. And regular readers of Language Log will have noticed that some pretty bad papers get published by non-open-access journals. This includes Science, where, alas, it's hard to think of any decent-quality linguistics papers that have appeared in recent years, and easy to think of several deeply embarrassing ones…

But the bad papers published in journals like Science typically present exciting-sounding results with fundamental conceptual or experimental flaws — they're not meaningless sham papers created by random substitution of names of languages, phonemes, lexical categories, etc., into typed slots in a "Mad Libs" framework.

So there's definitely a problem here. And the problem is compounded by the fact that the large-scale publishers of dubious open-access journals are being bought up by nominally reputable for-profit publishers. Bohannon notes that

Journals published by Elsevier, Wolters Kluwer, and Sage all accepted my bogus paper. Wolters Kluwer Health, the division responsible for the Medknow journals, "is committed to rigorous adherence to the peer-review processes and policies that comply with the latest recommendations of the International Committee of Medical Journal Editors and the World Association of Medical Editors," a Wolters Kluwer representative states in an e-mail. "We have taken immediate action and closed down the Journal of Natural Pharmaceuticals."

In 2012, Sage was named the Independent Publishers Guild Academic and Professional Publisher of the Year. The Sage publication that accepted my bogus paper is the Journal of International Medical Research. Without asking for any changes to the paper's scientific content, the journal sent an acceptance letter and an invoice for $3100. "I take full responsibility for the fact that this spoof paper slipped through the editing process," writes Editor-in-Chief Malcolm Lader, a professor of psychopharmacology at King's College London and a fellow of the Royal Society of Psychiatrists, in an e-mail. He notes, however, that acceptance would not have guaranteed publication: "The publishers requested payment because the second phase, the technical editing, is detailed and expensive. … Papers can still be rejected at this stage if inconsistencies are not clarified to the satisfaction of the journal." Lader argues that this sting has a broader, detrimental effect as well. "An element of trust must necessarily exist in research including that carried out in disadvantaged countries," he writes. "Your activities here detract from that trust."

The Elsevier journal that accepted the paper, Drug Invention Today, is not actually owned by Elsevier, says Tom Reller, vice president for Elsevier global corporate relations: "We publish it for someone else." In an e-mail to Science, the person listed on the journal's website as editor-in-chief, Raghavendra Kulkarni, a professor of pharmacy at the BLDEA College of Pharmacy in Bijapur, India, stated that he has "not had access to [the] editorial process by Elsevier" since April, when the journal's owner "started working on [the] editorial process." "We apply a set of criteria to all journals before they are hosted on the Elsevier platform," Reller says. As a result of the sting, he says, "we will conduct another review."

I was happy to see that a large open-access publisher for which I have a lot of respect, BioMed Central, was apparently not one of those taken in by the sting. And BioMed Central is owned by Springer Science+Business Media, showing that mere ownership by a large European commercial publisher is not a guarantee of rapacity and intellectual fraud.

Returning to Curt Rice's piece in the Guardian, his main point is that the real problem is not the business models of publishers but the antiquated and dysfunctional system of peer review:

Bad work gets published. This is a crisis for science and it's the crisis that Science shines a sharp light on this week. But Science misread the cause, which was not about making the results of research freely available via open access, but the meltdown of the peer review system. We need change. It's the digital age that allows that change, and the very best open access journals that are leading the development of new approaches to peer review.

Here are the basic facts about journal peer review as I perceive them to be:

(1) It doesn't maintain quality. A large amount of really, really bad work gets published, including (and even especially) at high-impact, top-rated journals.

(2) It slows down the pace of innovation, except in fields where journal publication has largely been abandoned as a communications method.  There are many journals (including for example Language) where the time from submission to publication may routinely be more than 18 months. As a result, in many fields of computer science, for example, journal publication has become a sort of quaint academic ritual, rather like wearing academic regalia at commencement — something that people do out of a sense of tradition and not because it has any remaining function. Instead, people learn about new ideas from conference presentations and conference proceedings (which are peer-reviewed, but in a very different way from journals), or from un-refereed web publication, as for instance on arXiv.org.

(3) Yes/No evaluative decisions are better made after an idea or result is published, and it's obvious that in the future some kind of social-media evaluation procedure will take care of this function.  Back-and-forth to modify articles before publication is sometimes worthwhile and sometimes just pointless dithering, but even when it's worthwhile, its value is decreased by the fact that it's hidden from everyone except the reviewer and the writer.  Open conversation after publication — perhaps resulting in new and improved versions — is a much more useful kind of communication.

(4) There is a complex set of problems around criticism, anonymity, retaliation, etc., and these are serious issues for a more open evaluation process – but in fact the current situation is full of Bad Stuff caused by the same motivations and dynamics; it's just hidden from view.

As Curt puts it:

The real problem for science today is quality control. Peer review has been at the heart of this, but there are too many failures – both in open access and traditional journals – simply to plod ahead with the same system. We need new approaches and numerous individuals and organisations are working on these, such as the open evaluation project.

The creative potential offered by digital communication of scientific results, an area in which open access journals are leading the way, is exactly where we need to focus. And if we do so, we will solve the problem of the broken peer review system that Science and the gonzo scientist have uncovered.