FAIRly big: A framework for computationally reproducible processing of large-scale data | bioRxiv

peter.suber's bookmarks 2022-02-14

Summary:

Abstract: Large-scale datasets present unique opportunities to perform scientific investigations with unprecedented breadth. However, they also pose considerable challenges for the findability, accessibility, interoperability, and reusability (FAIR) of research outcomes due to infrastructure limitations, data usage constraints, or software license restrictions. Here we introduce a DataLad-based, domain-agnostic framework suitable for reproducible data processing in compliance with open science mandates. The framework attempts to minimize platform idiosyncrasies and performance-related complexities. It affords the capture of machine-actionable computational provenance records that can be used to retrace and verify the origins of research outcomes, as well as be re-executed independently of the original computing infrastructure. We demonstrate the framework’s performance using two showcases: one highlighting data sharing and transparency (using the studyforrest.org dataset) and another highlighting scalability (using the largest public brain imaging dataset available: the UK Biobank dataset).
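The provenance capture the abstract describes can be illustrated with DataLad's own CLI. This is a minimal sketch, not the paper's actual pipeline: the `datalad create`, `datalad run`, and `datalad rerun` commands are real DataLad features, but the dataset name, file paths, and analysis script here are illustrative assumptions. The script exits cleanly if DataLad is not installed.

```shell
#!/bin/sh
# Sketch of DataLad-style provenance capture (dataset name, paths, and
# script are hypothetical; the datalad commands themselves are real).
command -v datalad >/dev/null 2>&1 || { echo "datalad not installed; skipping sketch"; exit 0; }

# Create a dataset that version-controls data and code together.
datalad create my-analysis
cd my-analysis

# `datalad run` executes a command and records a machine-actionable
# provenance record (command line, inputs, outputs) in the git history.
datalad run -m "compute summary" \
  --input  "raw/data.csv" \
  --output "results/summary.csv" \
  "python code/summarize.py"

# Anyone who obtains the dataset can verify or regenerate the result by
# re-executing the recorded command, independently of the original machine:
datalad rerun HEAD
```

The key point mirrored from the abstract is the last step: because the provenance record is machine-actionable, `datalad rerun` can reproduce the output without access to the computing infrastructure that produced it originally.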


Link:

https://www.biorxiv.org/content/10.1101/2021.10.12.464122v2

From feeds:

Open Access Tracking Project (OATP) » peter.suber's bookmarks

Tags:

oa.new oa.data oa.fair oa.reproducibility oa.platforms oa.standards oa.recommendations

Date tagged:

02/14/2022, 09:43

Date published:

02/14/2022, 04:43