Systematizing Confidence in Open Research and Evidence (SCORE) | SocArXiv Papers

flavoursofopenscience's bookmarks 2021-05-08

Summary:

Alipourfard, Nazanin, Beatrix Arendt, Daniel J. Benjamin, Noam Benkler, Michael M. Bishop, Mark Burstein, Martin Bush, et al. 2021. “Systematizing Confidence in Open Research and Evidence (SCORE).” SocArXiv. May 4. doi:10.31235/osf.io/46mnb.


Assessing the credibility of research claims is a central, continuous, and laborious part of the scientific process. Credibility assessment strategies range from expert judgment to aggregating existing evidence to systematic replication efforts. Such assessments can require substantial time and effort. Research progress could be accelerated if there were rapid, scalable, accurate credibility indicators to guide attention and resource allocation for further assessment. The SCORE program is creating and validating algorithms to provide confidence scores for research claims at scale. To investigate the viability of scalable tools, teams are creating: a database of claims from papers in the social and behavioral sciences; expert and machine-generated estimates of credibility; and evidence of reproducibility, robustness, and replicability to validate the estimates. Beyond the primary research objective, the data and artifacts generated from this program will be openly shared and will provide an unprecedented opportunity to examine research credibility and evidence.

Link:

https://doi.org/10.31235/osf.io/46mnb

From feeds:

Open Access Tracking Project (OATP) » flavoursofopenscience's bookmarks

Tags:

oa.new oa.open_science oa.credibility oa.assessment oa.score

Date tagged:

05/08/2021, 14:22

Date published:

05/08/2021, 10:22