On Moose and Medians (Or Why We Are Stuck With The Impact Factor) | The Scholarly Kitchen

abernard102@gmail.com 2016-04-13

Summary:

" ... At a recent publishing meeting, two senior biomedical scientists both lashed out at the Thomson Reuters representative sitting in the audience about how the Impact Factor was calculated and why the company couldn’t simply report median article citations. The TR representative talked about how averages (taken to three decimals) prevent ties and how her company provides many other metrics beyond the Impact Factor. The audience just about exploded, as it often does, and turned the discussion session into a series of pronouncements, counterpoints, and opinions ... The purpose of this post is not to further the antics and showmanship that accompany any discussion about metrics, but to describe why we are stuck with the Impact Factor — as it is currently calculated — for the foreseeable future. In the following figure, I plot total citations for 228 eLife articles published in 2013. You’ll first note that the distribution of citations to eLife papers is highly skewed. While the mean (average) citation performance was 22.1, the median (the point at which half of eLife papers do better and half do worse) was just 15.5. The difference between mean and median is the result of some very highly-cited eLife papers, one having received 321 citations to date ... So, if I can calculate a median citation performance for eLife, why can’t Thomson Reuters? Here’s why ..."
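The summary's central point, that a few heavily cited papers pull the mean well above the median of a skewed distribution, can be illustrated with a minimal sketch. The citation counts below are synthetic, invented for illustration; they are not the actual eLife data from the post:

```python
# Sketch: why mean and median diverge for skewed citation distributions.
# The counts below are made up for illustration, not the real eLife data.
from statistics import mean, median

# Most papers receive modest citation counts; one outlier is cited heavily.
citations = [3, 5, 8, 10, 12, 14, 15, 16, 18, 20, 25, 40, 60, 321]

print(f"mean   = {mean(citations):.1f}")    # mean   = 40.5
print(f"median = {median(citations):.1f}")  # median = 15.5

# Removing the single 321-citation outlier barely moves the median,
# but drops the mean sharply.
print(f"mean without outlier = {mean(citations[:-1]):.1f}")
```

This is the same effect the post describes for eLife: the median is robust to outliers, while the mean (and hence the Impact Factor, which is an average) is dominated by a handful of highly cited papers.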

Link:

https://scholarlykitchen.sspnet.org/2016/04/12/on-moose-and-medians-or-why-we-are-stuck-with-the-impact-factor/

From feeds:

Open Access Tracking Project (OATP) » abernard102@gmail.com

Tags:

oa.new oa.comment oa.jif oa.impact oa.thomson_reuters oa.metrics

Date tagged:

04/13/2016, 07:49

Date published:

04/13/2016, 03:49