Platform regulation should focus on transparency, not content.

amarashar's bookmarks 2020-12-03

Summary:

This article is part of the Free Speech Project, a collaboration between Future Tense and the Tech, Law, & Security Program at American University Washington College of Law that examines the ways technology is influencing how we think about speech.

Despite efforts by digital platforms to curb the tsunami of disinformation surrounding U.S. elections, cyberspace remains awash in conspiracy theories and democracy-damaging disinformation. Meanwhile, terrorist attacks in France and Austria have spurred European efforts to clamp down on hatred and incitement to violence online. On Dec. 15, the European Commission is slated to release a draft set of comprehensive platform regulations. These European rules could become the standard for the global net—leaving the U.S. behind.

We have seen this before. American policymakers sat on the sidelines while the EU enacted its General Data Protection Regulation, which has become the de facto global standard. If America wants to help shape the rules of the road governing online discourse, it must step up and engage now. What if, instead of pursuing conflicting paths, Europeans and Americans collaborated on a good governance framework for online platforms, adaptable for different legal systems and societal norms?

Right now, President-elect Joe Biden is setting his administration’s policy agenda and selecting personnel. Platform governance undoubtedly is on the table. By expressly endorsing trans-Atlantic collaboration on a digital framework, Biden would underscore his commitment to the trans-Atlantic alliance and ensure that American voices are heard. Similarly, Europeans could signal that they would welcome American engagement in developing the rules of the road for digital networks.

Government regulation of online harms is a daunting challenge. Not all toxic content is illegal, and lawmakers must tread carefully to avoid infringing on free expression and due process. And while the platforms enjoy their own free speech rights to set and enforce standards for their online communities, they also must respect widely recognized free expression exclusions for illegal content such as child pornography and incitement to violence.

There are two regulatory methods to curtail online harms that might constrain freedom of expression. The first is eliminating the platform’s safe harbor from liability for user content, which is exactly what multiple proposals currently before Congress would do. Section 230 of the Communications Decency Act protects platforms from lawsuits over third-party posts, and it has become a target of both the left and the right. The bills take polar opposite stands on the problem and the solution, whipsawing platforms between demands from the left that they remove blatantly false or manipulated speech and allegations from the right that conservative voices are deliberately censored. (On Tuesday evening, President Trump tweeted that he would veto the National Defense Authorization Act if Congress doesn’t repeal Section 230.) If online companies become liable for content that users post, platforms could well choose to eliminate popular services featuring user-generated content.

The second method is requiring platforms to remove specific kinds of content immediately or face stiff penalties, as with Germany’s NetzDG, which incentivizes platforms to delete questionable yet legal content. This approach deputizes companies to adjudicate the legality of content without affording users judicial redress. (Indeed, the French Constitutional Council struck down a similar law for violating free expression.) Ominously, such laws also provide cover to authoritarian regimes to expand categories of speech subject to censorship. As a Chinese academic once proudly intoned, “there is no hate speech on our internet.”

But it is possible to tackle hate speech and disinformation without trampling on free expression, if the U.S. and Europe work together: by mandating transparency—with accountability—instead of regulating content. Require social media companies to provide greater transparency about their content moderation rules and procedures, including how their algorithms influence what users see, and enforce these disclosures through robust oversight. (Such a regulatory approach would complement, not replace, platform competition and privacy laws.)

Link:

https://slate.com/technology/2020/12/platform-regulation-european-commission-transparency.html

From feeds:

Harmful Speech » amarashar's bookmarks

Tags:

harmfulspeech

Date tagged:

12/03/2020, 14:16

Date published:

12/03/2020, 09:16