Machine learning, metrics & merit: the future of research assessment, 12 Dec 2022 at 2pm (GMT) | Eventbrite

flavoursofopenscience's bookmarks 2022-11-29


About this event

The use of quantitative indicators and metrics in research assessment continues to generate a mix of enthusiasm, hostility and critique. To this mix, we can add growing interest in uses of machine learning and artificial intelligence (AI) to automate assessment processes and reduce the cost and bureaucracy of conventional methods of peer and panel-based review.

Novel methods also bring potential pitfalls, uncertainties and dilemmas, and may operate in some tension with moves towards responsible research assessment, as reflected in the Declaration on Research Assessment (DORA) and the new Coalition for Advancing Research Assessment (CoARA).

As the UK again reviews its approach to research assessment and the design of the Research Excellence Framework (REF), these and other issues are up for discussion through the Future Research Assessment Programme (FRAP), initiated by the four UK higher education funding bodies.

On Monday 12 December, in partnership with Research England, the Research on Research Institute (RoRI) will hold an online workshop to launch two new studies that should make significant contributions to the FRAP process.

The first, led by Professor Mike Thelwall, is a ground-breaking analysis of whether one could run a REF exercise using AI. The second is an updated review of the role of metrics in the UK research assessment system, building on the 2015 review, The Metric Tide, which called for responsible approaches to the use of metrics and cautioned against purely metric-based approaches to assessment. For more on these studies, see recent articles in Nature, Research Professional and Times Higher Education.

We are also delighted to be joined by Professor Dame Jessica Corner, new Executive Chair of Research England—who will offer opening keynote remarks—and by two panels of UK and international experts. Please join us on 12 December if you are interested in what happens next to the REF, and in broader international debates about the future of research assessment.


14:00 Welcome and introductions—James Wilsdon, Director, RoRI (Chair)

14:05 The future of research assessment—opening keynote remarks by Professor Dame Jessica Corner, Executive Chair, Research England

14:15 Can REF output quality scores be assigned by AI?—Mike Thelwall, Professor of Information Science and leader of the Statistical Cybermetrics Research Group, University of Wolverhampton (20 min presentation; 10 min Q&A)

14:45 PANEL: AI and the automation of evaluation

Professor Emma Flynn, Pro Vice-Chancellor for Research and Enterprise, Queen's University Belfast

Ludo Waltman, Professor of Quantitative Science Studies, CWTS, Leiden & Associate Director of RoRI

Gustav Petersson, Senior Analyst, Swedish Research Council

Juan Mateos-Garcia, formerly Director of Data Analytics at Nesta (now DeepMind)

15:30 Tea break 

15:40 Harnessing the Metric Tide: an updated review of the role of metrics and indicators (20 min presentation; 10 min Q&A)

Stephen Curry, Professor of Structural Biology and Assistant Provost for Equality, Diversity and Inclusion, Imperial College London, and Chair, DORA Steering Committee

Elizabeth Gadd, Research Policy Manager, Loughborough University, and Chair of the International Network of Research Management Societies (INORMS) Research Evaluation Group

16:10 PANEL: Where next for responsible research assessment?

Karen Stroobants, Royal Society of Chemistry & Coalition for Advancing Research Assessment (CoARA)

Helen Cross, Director of Research & Innovation, Scottish Funding Council (TBC)

Andrew Jack, Global education editor, Financial Times

Jon Holm, Special Advisor, Research Council of Norway

17:05 Closing remarks and next steps

Sir Peter Gluckman, Chair, FRAP International Advisory Group (video message)

Dr Steven Hill, Director of Research, Research England


From feeds:

Open Access Tracking Project (OATP) » flavoursofopenscience's bookmarks

Tags: oa.rori oa.assessment oa.metrics oa.peer_review

Date tagged:

11/29/2022, 08:19

Date published:

11/29/2022, 03:19