# The Journal Impact Factor: how (not) to evaluate researchers

#### peter.suber's bookmarks 2018-10-18

### Summary:

"I find this practice already highly questionable. First of all, it appears the formula calculates a statistical mean. However, no article can receive fewer than 0 citations, while there is no upper limit to citations. Most articles – across all journals – receive only very few citations, and only a few may receive a lot of citations. This means we have a ‘skewed distribution’ when we plot how many papers received how many citations. The statistical mean, however, is not appropriate for skewed distributions. Moreover, basic statistics and probability tell us that if you blindly choose one paper from a journal, it is impossible to predict – or even roughly estimate – its quality from the average citation rate alone. It is further impossible to know the author’s actual contribution to said paper. Thus, we are already stacking three statistical fallacies by applying the JIF to evaluate researchers.

But this is just the beginning! Journals don’t have an interest in the Journal Impact Factor as a tool for science evaluation. Their interest is in the advertising effect of the JIF. As we learn from our guest, Dr. Björn Brembs (professor for neurogenetics at the University of Regensburg), journals negotiate with the private company Clarivate Analytics (formerly Thomson Reuters) that provides the numbers. Larger publishers especially have a lot of room to influence the numbers above and below the division line in their favor...."
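The skewed-distribution argument in the summary can be illustrated with a quick simulation. The sketch below is not from the source: it uses a hypothetical lognormal model of citation counts (a common stand-in for right-skewed data) simply to show that a JIF-style mean sits well above the typical paper, so the average tells you little about a randomly drawn article.

```python
import random
import statistics

random.seed(42)

# Hypothetical citation counts for 1,000 papers in one journal:
# most papers receive few citations, a handful receive very many.
# The lognormal parameters are illustrative assumptions, not real data.
citations = [int(random.lognormvariate(mu=0.5, sigma=1.5)) for _ in range(1000)]

mean = statistics.mean(citations)      # what a JIF-style average computes
median = statistics.median(citations)  # what a "typical" paper looks like

# In a right-skewed distribution the mean is pulled up by a few
# highly cited outliers, so most papers fall below the average.
below_mean = sum(c < mean for c in citations) / len(citations)

print(f"mean (JIF-like): {mean:.2f}")
print(f"median: {median}")
print(f"share of papers below the mean: {below_mean:.0%}")
```

Running this, the mean lands well above the median and a clear majority of papers sit below the journal-wide average, which is the point the summary makes about predicting a single paper's quality from the JIF.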