New Study: People Have A Negative View Of Advertisers Who Still Advertise On Platforms That Allow Hate Speech
Ars Technica 2023-09-14
One of the things we’ve tried to get across over the years (perhaps unsuccessfully) is that laws to get rid of hate speech are not only almost always abused, they’re also counterproductive in the actual fight against hate. Supporters of those laws seem to assume that without them, nothing at all can be done about “hate speech.” But that’s false. There are all sorts of ways to actually combat hate speech, and part of that is making it socially and economically unacceptable.
For years, people have insisted that social media companies have “no incentive” to keep hate speech off their platforms, and for years, we’ve explained why that’s wrong. If your platform is overrun with hate speech, that’s bad for the platform: users start to go elsewhere. And if your business model is advertising, so do the advertisers.
And now we have some empirical evidence to back this up. CCIA has released a report on the impact of harmful content on brands and advertising, based on surveys that presented users with hypothetical social media scenarios in which hate speech either is or is not moderated. Turns out, as we said, allowing hate speech on your website drives users and advertisers away (someone should tell Elon). It also makes users think poorly of the advertisers who remain.
In a hypothetical scenario where hate speech was not moderated on social media services, research also found negative implications for brands that advertise on the services when hate speech was viewed. Proximity to content that included hate speech resulted in some respondents reporting that the content made them like the advertiser less. It also resulted in a slight decrease in favorable opinions of the advertiser brand, as well as a larger change in net favorability, with some of the movement shifting from favorable opinions to neutral (i.e., neither favorable nor unfavorable) opinions. Respondents who viewed content with hate speech also reported a lower likelihood of purchasing the advertised brand that directly preceded the content, compared to those respondents who viewed social media content with a positive or neutral tone right after the ad.
The results suggest that consumer sentiment toward a social media service would decline if it did not remove user-generated hate speech, and that consumer sentiment would also decline for brands that advertise on the same platform adjacent to said content. These findings indicate that social media services have a rational incentive to moderate harmful content such as hate speech and are consistent with digital services’ assertions that not all engagement adds value and that, in fact, some engagement is of negative value.
While this particular paper seems targeted at responding to laws from the other side of the aisle, such as the contested laws in Texas and Florida that would create “must carry” requirements for certain forms of speech, I think the argument applies equally well to states like New York and California that are trying to pressure companies with legal mandates to remove such content.
However, a number of “must-carry” bills have been proposed in various jurisdictions that, if enacted, could limit social media services’ ability to remove or deprioritize harmful user-generated content. Two such bills recently became law in Texas and Florida but are not yet in effect, pending consideration by the U.S. Supreme Court. Until this paper, there had been little public-facing research exploring the implications of hypothetical legal requirements that would require social media services to display content that would otherwise violate their current hate speech policies.
The study here is basically highlighting that both types of laws are bad. The Texas and Florida laws are bad in that they would do real damage to the business models of these companies, because the market (remember when the GOP was supposed to be the party supporting the free market?) is telling websites that users and advertisers don’t want hate speech on those platforms.
As these surveys show, websites moderating hate speech are doing so for perfectly legitimate business reasons (to avoid having users and advertisers flee). It’s not because they’re “woke” or trying to silence anyone. They’re just trying to keep the people on their platform from killing each other.
And the study also suggests that the laws in California and New York don’t help either, as the companies already have financial incentives to avoid platforming hate speech. They don’t need a law to come in and tell them this. The market actually functions just fine as a motivator.