Generative AI Policy Must Be Precise, Careful, and Practical: How to Cut Through the Hype and Spot Potential Risks in New Legislation
Deeplinks 2023-07-07
Summary:
Anxiety about generative AI is growing almost as fast as the use of the technology itself, fueled by dramatic rhetoric from prominent figures in tech, entertainment, and national security. Something, they suggest, must be done to stave off any number of catastrophes, from the death of the artist to the birth of new robot overlords.
Given the often hyperbolic tone, it might be tempting (and correct) to dismiss much of this as the usual moral panic new technologies provoke, or as self-interested hype. But there are legitimate concerns in the mix, too, that may require some rules of the road. If so, policymakers should answer some important questions before crafting or passing those rules. As always, the devil is in the details, and EFF is here to help you sort through them to identify solid strategies and potential collateral damage.
First, policymakers should be asking whether the new legislation is both necessary and properly focused. Generative AI is a category of general-purpose tools with many valuable uses. For every image that displaces a potential low-dollar commission for a working artist, there are countless more that don’t displace anyone’s living—images created by people expressing themselves or adding art to projects that would simply not have been illustrated. Remember: the major impact of automated translation technology wasn’t displacing translators—it was the creation of free and simple ways to read tweets and webpages in other languages when a person would otherwise not know what was being said.
Sometimes we don’t need a new law—we just need to do a better job with the ones we already have
But a lot of the rhetoric we are hearing ignores those benefits and focuses solely on potentially harmful uses, as if the tool itself were the problem (rather than how people use it). The ironic result: a missed opportunity to pass and enforce laws that can actually address those harms.
For example, if policymakers are worried about privacy violations stemming from the collection and use of images and personal information in generative AI, a focus on the use rather than the tool could lead to a broader law: real and comprehensive privacy legislation that covers all corporate surveillance and data use. Ideally, that law would both limit the privacy harms of AI (generative and other forms) and be flexible enough to stay abreast of new technological developments.
But sometimes we don’t need a new law—we just need to do a better job with the ones we already have. If lawmakers are worried about misinformation, for example, they might start by reviewing (and, where necessary, strengthening) resources for enforcement of existing laws on fraud and defamation. It helps that courts have spent decades assessing those legal protections and balancing them against countervailing interests (such as free expression); it makes little sense to relitigate those issues for a specific technology. And where existing regulations are truly ineffective in other contexts, proponents must explain why they will be more effective against misuses of generative AI.
Second, are the harms the proposal is supposed to alleviate documented or still speculative? For example, for years lawmakers (and others) have raised alarms about the mental health effects of social media use. But there’s little research to prove it, which makes it difficult to tailor a regulatory response. In other areas, such as the encryption debate, we’ve seen how law enforcement’s hypothetical or unproven concerns, …
Link:
https://www.eff.org/deeplinks/2023/07/generative-ai-policy-must-be-precise-careful-and-practical-how-cut-through-hype