Naturally Selected | The Faculty of 1000 blog
"Last week the San Francisco Declaration on Research Assessment (DORA) launched, with Faculty of 1000 as one of the original 75 scientific organizations to sign the Declaration. DORA is a laudable initiative, and anyone with an interest in improving the way we assess the quality and impact of research should read it and, if you agree with the proposals, sign it. As DORA states, it is imperative that scientific output be measured accurately and evaluated wisely.

The migration of science and science publishing to the web has given us a wealth of new tools to measure the usage of published papers – and of other products of research such as datasets and software – that are not based on citations. These usage measures are encapsulated in the growing ‘altmetrics’ landscape (for a summary see). F1000Prime recommendations, which provide a machine-readable star rating of papers along with a human-readable comment, are an established non-citation-based metric. An increasing number of publishers and publishing services use our data – including Altmetric, in which F1000Prime is a distinct measure – and we frequently make F1000Prime data freely available for research on metrics.

Many journals and publishers, including F1000Research, are developing collections of article-level metrics (ALM) tools. ALMs measure the usage and impact of specific papers and can include citations, although even simply displaying article download information is a form of ALM. Indeed, BioMed Central, the first commercial open access publisher, founded by F1000 Chairman Vitek Tracz, has always made article download statistics available.

Importantly, DORA is not just about derailing the Impact Factor. The Impact Factor is not a completely meaningless metric, but it needs to be used appropriately. Studies have shown the Impact Factor to have some value in assessing the quality of journals. Where it falls down – badly – is in the judgement of individuals and individual papers.
This is what DORA is about – using better, more appropriate tools to judge impact, and using those tools to gain a truer understanding of the value of research that the tax-paying public largely funds. New, non-citation-based metrics are not perfect either, but they greatly enrich the data we can use to understand scientific impact. Put another way, all measures of research impact have limitations – and appropriate and inappropriate uses – and unknowns. We should recognise the limitations of all metrics and of our own knowledge, drop the ‘alternative’, and just call them research metrics. We should also recognise that research metrics are often surrogates for impact and influence that can be more difficult to measure ..."