Six principles for assessing scientists for hiring, promotion, and tenure | Impact of Social Sciences
ab1630's bookmarks 2018-06-10
"The negative consequences of relying too heavily on metrics to assess research quality are well known, potentially fostering practices harmful to scientific research such as p-hacking, salami science, or selective reporting. The “flourish or perish” culture defined by these metrics in turn drives the system of career advancement in academia, a system that empirical evidence has shown to be problematic and which fails to adequately take societal and broader impact into account. To address this systemic problem, Florian Naudet, John P. A. Ioannidis, Frank Miedema, Ioana A. Cristea, Steven N. Goodman and David Moher present six principles for assessing scientists for hiring, promotion, and tenure.

Academic medical institutions, when hiring or promoting faculty whom they hope will move science forward in impactful ways, are confronted with a familiar problem: it is difficult to predict whose scientific contribution will be greatest, or will meet an institution’s values and standards. Some aspects of a scientist’s work are easily determined and quantified, like the number of published papers. However, publication volume does not measure “quality”, if by quality we mean substantive, impactful science that addresses valuable questions and is reliable enough to build upon. Recognising this, many institutions augment publication numbers with measures they believe better capture the scientific community’s judgement of research value.

The journal impact factor (JIF) is perhaps the best known and most widely used of such metrics. The JIF for a given year is the average number of citations received that year by the articles a journal published during the preceding two years. Like publication numbers, it is easy to measure but may fail to capture what an institution values.
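The two-year JIF arithmetic described above can be sketched in a few lines; the journal figures below are entirely hypothetical, chosen only to illustrate the ratio:

```python
def journal_impact_factor(citations_in_year: int, citable_items_prior_two_years: int) -> float:
    """Two-year JIF: citations received in a given year to material the
    journal published in the preceding two years, divided by the number
    of citable items it published in those two years."""
    return citations_in_year / citable_items_prior_two_years

# Hypothetical journal: 1200 citations in 2018 to its 2016-2017 output,
# during which it published 150 citable items.
jif_2018 = journal_impact_factor(1200, 150)
print(jif_2018)  # 8.0
```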
For instance, at Rennes 1 University, the faculty of medicine’s scientific assessment committee evaluates candidates using a “mean” impact factor (the mean of the JIFs of all their published papers), and hiring of faculty requires publications in the journals with the highest JIFs. The committee attempts to make this more granular by also using a score that corrects the JIF for research field and author rank. (This score, “Système d’Interrogation, de Gestion et d’Analyse des Publications Scientifiques”, or SIGAPS, is not publicly available.) In China, Qatar, and Saudi Arabia, among other countries, scientists receive monetary bonuses for research papers published in high-JIF journals, such as Nature....
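The “mean” impact factor used by the committee above is simply the average of the JIFs of the journals in which a candidate’s papers appeared. A minimal sketch, with made-up per-paper JIF values:

```python
def mean_impact_factor(per_paper_jifs: list[float]) -> float:
    """Mean JIF across a candidate's papers: one entry per paper,
    taken from the JIF of the journal that published it."""
    return sum(per_paper_jifs) / len(per_paper_jifs)

# Hypothetical candidate with four papers in journals of varying JIF.
print(mean_impact_factor([2.5, 4.0, 10.5, 3.0]))  # 5.0
```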
To address these issues, the Meta-research Innovation Center at Stanford (METRICS) convened a one-day workshop in January 2017 in Washington, DC, to discuss and propose strategies to hire, promote, and tenure scientists. It comprised 22 people representing different stakeholder groups from several countries (deans of medicine, public and foundation funders, health policy experts, sociologists, and individual scientists).
The outcomes of that workshop were summarised in a recent perspective, in which we described an extensive but non-exhaustive list of current proposals aimed at aligning assessments of scientists with desirable scientific behaviours. Some large initiatives are gaining traction. For instance, the San Francisco Declaration on Research Assessment (DORA) has been endorsed by thousands of scientists and hundreds of academic institutions worldwide. It declares “a pressing need to improve how scientific research is evaluated”, and asks scientists, funders, institutions, and publishers to forswear using JIFs to judge individual researchers. Other proposals remain ideas, with no implementation yet...."