Anti-tutorial: how to design and execute a really bad study | Sauropod Vertebra Picture of the Week

abernard102@gmail.com 2013-10-07

Summary:

"Suppose, hypothetically, that you worked for an organisation whose nominal goal is the advancement of science, but which has mutated into a highly profitable subscription-based publisher. And suppose you wanted to construct a study that showed the alternative — open-access publishing — is inferior. What would you do? You might decide that a good way to test publishers is by sending them an obviously flawed paper and seeing whether their peer-review weeds it out. But you wouldn’t want to risk showing up subscription publishers. So the first thing you’d do is decide up front not to send your flawed paper to any subscription journals. You might justify this by saying something like “the turnaround time for traditional journals is usually months and sometimes more than a year. How could I ever pull off a representative sample?“. Next, you’d need to choose a set of open-access journals to send it to. At this point, you would carefully avoid consulting the membership list of the Open Access Scholarly Publishers Association, since that list has specific criteria and members have to adhere to a code of conduct. You don’t want the good open-access journals — they won’t give you the result you want. Instead, you would draw your list of publishers from the much broader Directory of Open Access Journals, since that started out as a catalogue rather than a whitelist. (That’s changing, and journals are now being cut from the list faster than they’re being added, but lots of old entries are still in place.) Then, to help remove many of the publishers that are in the game only to advance research, you’d trim out all the journals that don’t levy an article processing charge. But the resulting list might still have an inconveniently high proportion of quality journals. So you would bring down the quality by adding in known-bad publishers from Beall’s list of predatory open-access publishers. Having established your sample, you’d then send the fake papers, wait for the journals’ responses, and gather your results. To make sure you get a good, impressive result that will have a lot of “impact”, you might find it necessary to discard some inconvenient data points, omitting from the results some open-access journals that rejected the paper. Now you have your results, it’s time to spin them. Use sweeping, unsupported generalisations like “Most of the players are murky. The identity and location of the journals’ editors, as well as the financial workings of their publishers, are often purposefully obscured.” Suppose you have a quote from the scientist whose experiences triggered the whole project, and he said something inconvenient like “If [you] had targeted traditional, subscription-based journals, I strongly suspect you would get the same result”. Just rewrite it to say “if you had targeted the bottom tier of traditional, subscription-based journals”. Now you have the results you want — but how will you ever get through through peer-review, when your bias is so obvious? Simple: don’t submit your article for peer-review at all. Classify it as journalism, so you don’t need to go through review, nor to get ethical approval for the enormous amount of editors’ and reviewers’ time you’ve wasted — but publish it in a journal that’s known internationally for peer-reviewed research, so that uncritical journalists will leap to your favoured conclusion. Last but not least, write a press-release that casts the whole study as being about the “Wild West” of Open-Access Publishing ..."

Link:

http://svpow.com/2013/10/07/anti-tutorial-how-to-design-and-execute-a-really-bad-study/

From feeds:

Open Access Tracking Project (OATP) » abernard102@gmail.com

Tags:

oa.new oa.gold oa.comment oa.quality oa.bealls_list oa.doaj oa.credibility oa.oaspa oa.predatory oa.peer_review oa.journals

Date tagged:

10/07/2013, 20:32

Date published:

10/07/2013, 16:32