Why open-source generative AI models are an ethical way forward for science

Summary:

"From my perspective as a political and data scientist who is using and teaching about such models, scholars should be wary. The most widely touted LLMs are proprietary and closed: run by companies that do not disclose their underlying model for independent inspection or verification, so researchers and the public don’t know on which documents the model has been trained. The rush to involve such artificial-intelligence (AI) models in research is a problem. Their use threatens hard-won progress on research ethics and the reproducibility of results. Instead, researchers need to collaborate to develop open-source LLMs that are transparent and not dependent on a corporation’s favours. It’s true that proprietary models are convenient and can be used out of the box. But it is imperative to invest in open-source LLMs, both by helping to build them and by using them for research. I’m optimistic that they will be adopted widely, just as open-source statistical software has been. Proprietary statistical programs were popular initially, but now most of my methodology community uses open-source platforms such as R or Python...."

Link:

https://www.nature.com/articles/d41586-023-01295-4

From feeds:

[IOI] Open Infrastructure Tracking Project » Items tagged with oa.ai in Open Access Tracking Project (OATP)
[IOI] Open Infrastructure Tracking Project » Items tagged with oa.floss in Open Access Tracking Project (OATP)
Open Access Tracking Project (OATP) » peter.suber's bookmarks

Tags:

oa.tools oa.ethics reproducibility open_source_software artificial_intelligence

Date tagged:

04/19/2023, 14:56

Date published:

04/19/2023, 10:58