General Monitoring is not the Answer to the Problem of Online Harms

Deeplinks 2022-08-16

Summary:

Even if you think that online intermediaries should be more proactive in detecting, deprioritizing, or removing certain user speech, requirements that intermediaries review all content before publication—often called “general monitoring” or “upload filtering”—raise serious human rights concerns, both for freedom of expression and for privacy.

General monitoring is problematic both when it is directly required by law and when, though not formally required, it is effectively mandatory because the legal risks of not doing it are so great. These indirect requirements incentivize platforms to proactively monitor user behavior, filter and check user content, and remove or locally block anything that is controversial, objectionable, or potentially illegal in order to avoid legal responsibility. This inevitably leads to over-censorship of online content, as platforms seek to avoid liability for failing to act “reasonably” or for not removing user content they “should have known” was harmful.

Whether directly mandated or strongly incentivized, general monitoring is bad for human rights and for users. 

  • Because the scale of online content is so vast, general monitoring typically relies on automated decision-making tools, which reflect the biases of their training data and lead to harmful profiling.
  • These automated upload filters are error-prone, notoriously inaccurate, and tend to overblock legally protected expression.
  • Upload filters also contravene the foundational human rights principles of proportionality and necessity by subjecting users to automated and often arbitrary decision-making.
  • The active observation of all files uploaded by users has a chilling effect on freedom of speech and access to information by limiting the content users can post and engage with online.
  • A platform reviewing every user post also undermines users’ privacy rights by providing companies, and thus potentially government agencies, with abundant data about users. This is particularly threatening to anonymous speakers.
  • Pre-screening can lead to enforcement overreach, fishing expeditions (undue evidence exploration), and data retention.
  • General monitoring undermines the freedom to conduct business and adds compliance costs.
  • Monitoring technologies are even less effective for small platforms, which lack the resources to develop sophisticated filtering tools. General monitoring thus cements the gatekeeper role of a few powerful platforms and further marginalizes alternative platform governance models.

We have previously expressed concern about governments employing more aggressive and heavy-handed approaches to intermediary regulation, with policymakers across the globe calling on platforms to remove allegedly legal but ‘undesirable’ or ‘harmful’ content from their sites, while also expecting platforms to detect and remove illegal content. In doing so, states fail to protect fundamental freedom of expression rights and fall short of their obligations to ensure a free online environment with no undue restrictions on legal content, whilst also restricting the rights of users to share and receive impartial and unfiltered information. This has a chilling effect on the individual right to free speech, as users change their behavior online to avoid being monitored or censored.

Link:

https://www.eff.org/deeplinks/2022/08/general-monitoring-not-answer-problem-online-harms

From feeds:

Fair Use Tracker » Deeplinks
CLS / ROC » Deeplinks

Tags:

blocking

Authors:

Paige Collings, David Greene

Date tagged:

08/16/2022, 03:40

Date published:

08/16/2022, 03:21