Europe's Digital Services Act: On a Collision Course With Human Rights
Deeplinks 2021-10-27
Summary:
Last year, the EU introduced the Digital Services Act (DSA), an ambitious and thoughtful project to rein in the power of Big Tech and give European internet users more control over their digital lives. It was an exciting moment, as the world’s largest trading bloc seemed poised to end a string of ill-conceived technological regulations that were both ineffective and incompatible with fundamental human rights.
We were (cautiously) optimistic, but we didn’t kid ourselves: the same bad-idea-havers who convinced the EU to mandate over-blocking, under-performing, monopoly-preserving copyright filters would also try to turn the DSA into yet another excuse to subject Europeans’ speech to automated filtering.
We were right to worry.
The DSA is now steaming full speed ahead on a collision course with even more algorithmic filters - the decidedly unintelligent “AIs” that the 2019 Copyright Directive ultimately put in charge of 500 million people’s digital expression in the 27 European member-states.
Copyright filters are already working their way into national law across the EU as each country implements the 2019 Copyright Directive. Years of experience have shown us that automated filters are terrible at spotting copyright infringement, both underblocking (letting infringement slip through) and overblocking (removing content that doesn’t infringe copyright). Filters can also be easily tricked by bad actors into blocking legitimate content - for example, recordings made by members of the public of their encounters with police officials.
But as bad as copyright filters are, the filters the DSA could require are far, far worse.
The Filternet, Made In Europe
Current proposals for the DSA, recently endorsed by an influential EU Parliament committee, would require online platforms to swiftly remove potentially illegal content. One proposal would automatically make any “active platform” potentially liable for the communications of its users. What’s an active platform? One that moderates, categorizes, promotes or otherwise processes its users’ content. Punishing services that moderate or categorize illegal content is absurd - both are responsible ways of dealing with it.
These requirements give platforms the impossible task of identifying illegal content in real time, at speeds no human moderator could manage - with stiff penalties for guessing wrong. Inevitably, this means more automated filtering - something the platforms often boast about in public, even as their top engineers privately send memos to their bosses saying that these systems don’t work at all.
Large platforms will overblock, removing content according to the fast-paced, blunt determinations of an algorithm, while appeals for the wrongfully silenced will go through a review process that, like the algorithm, will be opaque and arbitrary. That review will also be slow: speech will be removed in an instant, but only reinstated after days, or weeks, or 2.5 years.
But at least the largest platforms would be able to comply with the DSA. It’s far worse for small services, run by startups, co-operatives, nonprofits and other organizations that want to support, not exploit, their users.
Link:
https://www.eff.org/deeplinks/2021/10/europes-digital-services-act-collision-course-human-rights-0