The Effect of Open Access upon Citation Impact

abernard102@gmail.com 2012-03-22

Summary:

“The debate about the effects of open access upon the visibility or impact of scientific publications started with the paper by Steve Lawrence (2001) in the journal Nature, entitled ‘Free online availability substantially increases a paper's impact’, which analyzed conference proceedings in the field of computer science. Here, ‘open access’ is not used to indicate the publisher business model based on the ‘authors pay’ principle but, more generally, in the sense of being freely available via the Web. From a methodological point of view, the debate focuses on biases, control groups, sampling, and the degree to which conclusions from case studies can be generalized. This note does not give a complete overview of studies published during the past decade but highlights key events. In 2004, Stevan Harnad and Tim Brody (2004) claimed that physics articles submitted as preprints to ArXiv (a preprint server covering mainly physics, hosted by Cornell University) and later published in peer-reviewed journals generated a citation impact up to 400% higher than papers in the same journals that had not been posted in ArXiv. Michael Kurtz and his colleagues (Kurtz et al., 2005) found in a study on astronomy evidence of a selection bias – authors post their best articles freely on the Web – and an early-view effect – articles deposited as preprints are published earlier and are therefore cited more often. Henk Moed (2007) found for articles in solid state physics that these two effects may explain a large part, if not all, of the differences in citation impact between journal articles posted as preprints in ArXiv and papers that were not. In a randomized controlled trial comparing open with subscription-based access to articles in psychology journals published by a single publisher, Phil Davis and his colleagues (Davis et al., 2008) did not find a significant effect of open access on citations.
In order to correct for selection bias, a new study by Harnad and his team (Gargouri et al., 2010) compared self-selective self-archiving with mandatory self-archiving at four research institutions. They argued that, although the first type may be subject to a quality bias, the second can be assumed to occur regardless of the quality of the papers. They found that the OA advantage was just as high for both, and concluded that it is real, independent and causal... They also found for the four institutions that the percentage of their publication output actually self-archived was at most 60%, and that for some it did not increase when their OA regime was transformed from non-mandatory into mandatory. Therefore, what the authors labeled as ‘mandated OA’ is in reality, to a large extent, subject to the same type of self-selection bias as non-mandated OA. On the other hand, it should be noted that all the citation-based studies mentioned above seem to share the following bias: they rest on citation analyses carried out in a citation index that selectively covers the good, international journals in each field... Those who publish in this selected set of good, international journals – a necessary condition for citations to be recorded in the OA-advantage studies mentioned above – will tend to have access to these journals anyway. In other words, there may be a positive effect of OA upon citation impact, but it is not visible in the database used. A citation index with more comprehensive coverage would enable one to examine the effect of the citation impact of covered journals upon the OA citation advantage; for instance, is such an advantage more visible in lower-impact or more nationally oriented journals than in international top journals? ... Analyzing article downloads (usage) is a complementary and, in principle, valuable method for studying the effects of OA.
In fact, the study by Phil Davis and colleagues mentioned above did apply this method and reported that OA articles were downloaded more often than papers with subscription-based access. However, significant limitations of this method are that not all publication archives provide reliable download statistics, and that archives which do generate such statistics may record and/or count downloads in different ways, so that results are not directly comparable across archives. The implication seems to be that usage studies comparing OA with non-OA articles can be carried out only in ‘hybrid’ environments, in which publishers offer authors who submit a manuscript both an ‘authors pay’ and a ‘readers pay’ option. But this type of OA may not be representative of OA in general, as it disregards self-archiving in the OA repositories being created at research institutions all over the world...”

Link:

http://editorsupdate.elsevier.com/2012/03/the-effect-of-open-access-upon-citation-impact/

Updated:

08/16/2012, 06:08

From feeds:

Open Access Tracking Project (OATP) » abernard102@gmail.com

Tags:

oa.new oa.impact oa.npg oa.gold oa.business_models oa.publishers oa.comment oa.green oa.elsevier oa.ir oa.arxiv oa.metrics oa.usage oa.prestige oa.hybrid oa.citations oa.studies oa.preprints oa.debates oa.case.impact oa.repositories oa.versions oa.journals

Authors:

abernard

Date tagged:

03/22/2012, 23:06

Date published:

03/23/2012, 18:34