European Content Removal Laws Are Scrubbing The Internet Of Completely Legal Content

Techdirt. 2024-06-14

A lot of laws have been passed in Europe that regulate the content American companies can carry. Most of these laws were passed to tamp down on speech that would otherwise be legal in the United States, but not in Europe, where free speech rights aren't given the same sort of protection they receive in the US.

Since most of the larger tech companies maintained overseas offices, they were subject to these laws. Those laws targeted everything from terrorist-related content to “hate speech” to whatever is currently vexing legislators. Attached to these mandates were hefty fines and the possibility of being asked to exit these countries completely.

Of course, the most important law governing content takedown demands was passed much, much earlier. I’m not talking about the CDA and its Section 230 immunity. No, it’s a law that required no input from legislators or lobbyists.

The law of unintended consequences has been in full force since the beginning of time. But it’s never considered to be part of the legislative process, despite hundreds of years of precedent. So, while the consequences are unintended, they should definitely be expected. Somehow, they never are.

And that brings us to this report [PDF] from The Future of Free Speech, a non-partisan think tank operating from the friendly confines of Vanderbilt University in Tennessee. (h/t Reason)

Legislators in three European countries have made many content-related demands of social media services over the past decade-plus. The end result, however, hasn’t been the eradication of “illegal” content, so much as it has been the eradication of speech that does not run afoul of this mesh network of takedown-focused laws.

When you demand that communication services respond quickly to vaguely written laws, the expected outcome is exactly what’s been observed here: the proactive removal of content, the vast majority of which doesn’t violate any of the laws these services are attempting to comply with.

This analysis found that legal online speech made up most of the removed content from posts on Facebook and YouTube in France, Germany, and Sweden. Of the deleted comments examined across platforms and countries, between 87.5% and 99.7%, depending on the sample, were legally permissible.

Equally unsurprising is this breakdown of the stats, which notes that Germany’s content removal laws (which have been in place longer and are much more strict due to its zero-tolerance approach to anything Nazi-adjacent) tend to result in the highest percentage of collateral damage.

The highest proportion of legally permissible deleted comments was observed in Germany, where 99.7% and 98.9% of deleted comments were found to be legal on Facebook and YouTube, respectively. This could reflect the impact of the German Network Enforcement Act (NetzDG) on the removal practices of social media platforms, which may over-remove content with the objective of avoiding the legislation’s hefty fine. In comparison, the corresponding figures for Sweden are 94.6% for both Facebook and YouTube. France has the lowest percentage of legally permissible deleted comments, with 92.1% of the deleted comments in the French Facebook sample and 87.5% of the deleted comments in the French YouTube sample.

This isn’t just a very selective sampling of content likely to be of interest to the three countries examined in this report. Nearly 1.3 million YouTube and Facebook comments were utilized for this study. That’s relatively microscopic in terms of the comments generated daily by both platforms, but it’s large enough (especially when restricted to three European countries) to determine content removal patterns.

The researchers discovered that more than half the comments removed by these platforms under these countries’ laws were nothing more than the sort of thing that makes the internet world go round, so to speak:

Among the deleted comments, the majority were classified as “general expressions of opinion.” In other words, these were statements that did not contain linguistic attacks, hate speech or illegal content, such as expressing support for a controversial candidate in the abstract. On average, more than 56% of the removed comments fall into this category.

So, the question is: are these policies actually improving anything? More to the point, are they even achieving the stated goals of the laws? The researchers couldn’t find any evidence supporting the theory that the collateral damage might be acceptable because it helps these governments achieve their aims. Instead, the report suggests things will only get worse: the geopolitical environment is in constant flux, which means the goalposts for content moderation are always in motion, while the punishments for non-compliance remain unchanged. That combination pretty much guarantees more of what’s been observed here.

[M]oderation of social media is understood by several countries as a delicate balance between freedom of expression, security, and protection of minorities. However, recent events and geopolitical developments could disrupt this perceived balance. National security concerns have caused governments to try to counter misinformation and interference from hostile nations with blunt tools. Additionally, but without making any definitive conclusions, there is some indication that legislation, such as the NetzDG, aimed at strengthening citizens and granting them certain rights, has the unintended effect of encouraging social media platforms to delete a larger fraction of legal comments. This is a preview into the potential impact of the EU’s DSA, now in force, on freedom of expression.

The report is far kinder in its observations than it probably should be. It says multiple EU governments “understand” that content moderation is a “delicate balance.” That rarely seems to be the case. This report makes it clear that content moderation at scale is impossible. But when companies point this out, regulators tend to view these assertions as flimsy excuses and insist this means nothing more than tech companies just aren’t trying hard enough to meet the (impossible) demands of dozens of laws and hundreds of competing interests.

The takeaway from this report should be abundantly clear. But somehow adherence to the law of unintended consequences is still considered to be a constant flouting of the Unicorns Do Exist laws passed by governments that firmly believe that any decree they’ve issued must be possible to comply with. Otherwise they, in their infinite wisdom, wouldn’t have written it in the first place.