Musings about the Open Science Prize | FORCE11

lterrat's bookmarks 2017-01-29

Summary:

"... for evaluation of the open science projects, I applied the rubric we described in our response to the RFI, but with additional considerations throughout relating to the PLOS data science collection curation, and trying to take into account advances since the open science prize project began (since some projects were preexisting and backed by other funds/projects, where others were brand new). I note that this is as much an evaluation of the rubric as it is of the projects themselves.

I purposefully did not watch any of the videos explaining the projects on the Open Science Prize website before performing the evaluation. I wanted to determine how well the projects themselves communicated their goals, content, and functionality. As a potential user of the data, I aimed to evaluate directly how easy it was to navigate, access, and reuse the data. Most importantly, I wanted to avoid a bias in which the real distinctions between projects would be obscured by video production quality rather than highlighted by each project's genuine values and differences. It would be all too easy to create a great video about a great idea and then not implement a quality platform based on strong open science principles, such as open code and data access, or the FAIR+ principles: Findable, Accessible, Interoperable, and Reusable, plus Traceable, Licensed, and Connected.

One might ask, why bother? The first reason was that I wanted to determine how well the preliminary rubric we laid out in our response to the RFI might work in the real world, as we plan to write a more thorough proposal for knowledgebase/data repository evaluation in the future. The second reason was that I simply wanted the evaluation of these projects to inform the future development of the open science projects I work on most, such as the Monarch Initiative (genotype-phenotype data aggregation across species for diagnostics and mechanism discovery), Phenopackets (a new standard for exchanging computable phenotype data for any species in any context), and OpenRIF (computable representation of scholarly outputs and contribution roles to better credit non-traditional scientists). How can we all do better and learn from the Open Science Prize competition? In other words, such a competition shouldn't just be about the six finalists; rather, it should inform how we all go about practicing open science in general.

So now you are probably wondering, which project(s) did I vote for? Well, that is for you to infer. As you review the musings below, consider your own values for what constitutes robust open science. Comments and corrections are entirely welcome."

Link:

https://www.force11.org/blog/musings-about-open-science-prize

From feeds:

Open Access Tracking Project (OATP) » lterrat's bookmarks

Tags:

oa.discoverability oa.libre

Date tagged:

01/29/2017, 22:13

Date published:

01/29/2017, 17:13