New Research Indicators and their meaning for Open Science

peter.suber's bookmarks 2022-11-28


"Summary of the argument

1. Why is evaluation such an issue in open science?
§ Incentive and reward structures are seen as barriers preventing researchers' engagement with OS.
– Elitist peer review and indicators (journal hierarchy and JIFs) closing off OS activities

2. Demands and expectations for new indicators to improve evaluation
§ Altmetrics (e.g., Twitter, news, and policy mentions), Open Access and Open Data statistics
§ Promises of universal indicators break down:
– Research is diverse; it cannot be described with general indicators (this may lead to goal displacement, task reduction, and gaming)
– Counting outputs does not necessarily reflect qualities

3. Indicator frameworks: towards plural and conditional assessment
§ Evaluation processes should depend on and take into account:
– Missions, evaluation goals, assessment levels, epistemic cultures, stakeholders, and environments of research
– A focus on processes of knowledge exchange and capabilities, with qualitative indicators..."



From feeds:

Open Access Tracking Project (OATP) » peter.suber's bookmarks


oa.slides oa.open_science oa.metrics oa.assessment oa.pids oa.infrastructure oa.interoperability oa.jif oa.recommendations oa.prestige oa.quality oa.metadata oa.i4oc oa.cwts oa.os_indicators

Date tagged:

11/28/2022, 13:31

Date published:

12/05/2018, 08:33