Large volumes of data are challenging open science - SciDev.Net

abernard102@gmail.com 2014-05-28

Summary:

" ... Self-correction was recently exemplified when a beam of neutrinos fired from CERN (the European Organization for Nuclear Research) to a laboratory 730 kilometres away seemed to travel faster than the speed of light. The detailed results were made openly available, resulting in the discovery of a timing error and a repeat experiment that respected the universal speed limit. But the ‘data explosion’ of the past 20 years poses severe challenges to the principle of self-correction. The volume and complexity of the data that can be acquired, stored and manipulated, coupled with ubiquitous technologies for instant communication, have created a flood of data — 90 per cent of all data were generated in the last two years. [1] ...

The data explosion has also created unprecedented opportunities for scientific discovery that some have argued place us on the verge of another scientific revolution. The opportunities lie in the potential of large volumes of complex data, integrated from different sources, to reveal deeper, previously hidden relationships in phenomena. For example, identifying varying patterns in the human genome carries great potential in areas such as personalised medicine.

But to seize the opportunities and address the problems created by the data explosion, scientists need to recognise the essential attributes of scientific openness. This is because openness in itself has no value unless it is, as a 2012 Royal Society report calls it, “intelligent openness”. [3] This means that published data should be accessible (can they be readily located?), intelligible (can they be understood?), assessable (can their source and reliability be evaluated?) and reusable (do the data have all the associated information required for reuse?). Scientific claims and concepts that are published without access to the data on which they are based in ways that satisfy these norms are the equivalents of adverts for the product rather than the product itself ...
In practice, open data depends on a willingness to share data, which can be both highly efficient and highly creative. There was a powerful example of open data in action in May 2011, when a severe gastrointestinal infection spread rapidly from Hamburg, Germany. The laboratory dealing with the outbreak shared data and samples with bioinformatics groups on four continents, leading to assembly of the offending microorganism’s genome within 24 hours. The analyses provided crucial information in time to help contain the outbreak ... Publicly funded scientists need to regard the data that they acquire as held in trust on behalf of society and abandon the view that the data are theirs. Funders of public research need to mandate open data as a condition of funding. Scientific publishers need to require concurrent publication of the data on which articles are based in electronic databases. And universities and research institutes need to recognise the importance of intelligent openness to the future of science ..."

Link:

http://m.scidev.net/global/data/opinion/large-volumes-of-data-are-challenging-open-science.html

From feeds:

Open Access Tracking Project (OATP) » abernard102@gmail.com

Tags:

oa.new oa.comment oa.data oa.open_science oa.standards oa.best_practices

Date tagged:

05/28/2014, 07:36

Date published:

05/28/2014, 03:36