Altmetrics and Research Assessment: How Not to Let History Repeat Itself | The Scholarly Kitchen

abernard102@gmail.com 2015-09-02

Summary:

"It seems that broadly, everybody agrees that the Impact Factor is a poor way to measure research quality. The most important objection is that it is designed to measure the academic impact of journals, and is therefore only a rough proxy for the quality of the research contained within those journals. As a result, article-level metrics are becoming increasingly common and are supported by Web of Science, Scopus and Google Scholar. There are also a number of alternative ways to measure citation impact for researchers themselves. In 2005 Jorge Hirsch, a physicist from UCSD, proposed the h-index, which is intended to be a direct measure of a researcher’s academic impact through citations. There are also a range of alternatives and refinements with names like m-index, c-index, and s-index, each with their own particular spin on how best to calculate individual contribution. While the h-index and similar metrics are good attempts to tackle the problem of the impact factor being a proxy measure of research quality, they can’t speak to a problem that has been identified over the last few years and is becoming known as the Evaluation Gap ... The Evaluation Gap is a concept that was introduced in a 2014 post by Paul Wouters, on the citation culture blog which he co-authors with Sarah de Rijcke, both of whom are scholars at the University of Leiden. The idea of the gap is summed up by Prof Wouters as: '…the emergence of a more fundamental gap between on the one hand the dominant criteria in scientific quality control (in peer review as well as in metrics approaches), and on the other hand the new roles of research in society.' In other words, research plays many different roles in society ... In April of this year, the Leiden manifesto, which was written by Diana Hicks and Paul Wouters, waspublished in nature. There has been surprisingly little discussion about it in publishing circles. It certainly seems to have been met with less buzz than the now iconic altmetrics manifesto, which Jason Priem et al., published in 2010.  As Cassidy Sugimoto (@csugimoto) pointed out in the session at SSP that I moderated, the Leiden manifesto serves as a note of caution ..."

Link:

http://scholarlykitchen.sspnet.org/2015/09/01/altmetrics-and-research-assessment-how-not-to-let-history-repeat-itself/

From feeds:

Open Access Tracking Project (OATP) » abernard102@gmail.com

Tags:

oa.new oa.comment oa.altmetrics oa.impact oa.citations oa.jif oa.leiden_manifesto oa.metrics

Date tagged:

09/02/2015, 07:37

Date published:

09/02/2015, 03:37