PPPPredatory Article Counts: An Investigation Part 3 « Walt at Random

page_amanda's bookmarks 2015-11-12

Summary:

If you haven’t read Part 1 and Part 2—and, to be sure, Cites & Insights December 2015—none of this will make much sense. What would happen if I replicated the sampling techniques actually used in the study (to the extent that I understand the article)? I couldn’t precisely replicate the sampling. My working dataset had already been stripped of several thousand “journals” and quite a few “publishers,” and I took Beall’s lists a few months before Shen/Björk did. (In the end, the number of journals and “journals” in their study was less than 20% larger than in my earlier analysis, although there’s no way of knowing how many of those journals and “journals” actually published anything. In any case, if the Shen/Björk numbers had been 20% or 25% larger than mine, I would have said “sounds reasonable” and let it go at that.) For each tier in the Shen/Björk article, I took two samples, both using random techniques, and for all but Tier 4, I used two projection techniques—one based on the number of active true gold OA journals in the tier, one based on all journals in the tier. (For Tier 4, singleton journals, there’s not enough difference between the two to matter much.) In each tier, I used a sample size and technique that followed the description in the Shen/Björk article. The results were interesting.
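The per-tier sample-and-project approach described above can be sketched roughly as follows. This is a minimal illustration, not the post's actual method: the tier sizes, sample size, article counts, and the `project_article_count` helper are all hypothetical, and the only point shown is how the two projection bases (active gold OA journals vs. all journals in the tier) yield different totals from the same random sample.

```python
import random

def project_article_count(sampled_counts, population_size):
    """Project a tier's total article output from a random sample:
    mean articles per sampled journal, scaled up by the number of
    journals the projection is based on."""
    mean_per_journal = sum(sampled_counts) / len(sampled_counts)
    return mean_per_journal * population_size

# Hypothetical tier: 5,000 listed journals, 3,200 of them active
# true gold OA. Article counts per journal are made-up data.
random.seed(42)
tier_articles = [random.randint(0, 60) for _ in range(5000)]

# Draw one random sample of 100 journals from the tier.
sample = random.sample(tier_articles, 100)

# Projection technique 1: based on active true gold OA journals.
est_active = project_article_count(sample, 3200)

# Projection technique 2: based on all journals in the tier.
est_all = project_article_count(sample, 5000)
```

With a fixed mean per sampled journal, the two estimates differ only by the ratio of the two population bases, which is why the choice of base matters so much when many listed "journals" publish nothing.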

Link:

http://walt.lishost.org/2015/11/ppppredatory-article-counts-an-investigation-part-3/

From feeds:

Open Access Tracking Project (OATP) » page.amanda

Tags:

oa.new oa.gold oa.objections oa.peer_review oa.best_practices oa.credibility oa.predatory oa.journals

Date tagged:

11/12/2015, 12:05

Date published:

11/12/2015, 07:05