Five Decisions Illustrate How Section 230 Is Fading Fast

Techdirt. 2024-10-18

Professor Eric Goldman continues to be the best at tracking any and all developments regarding internet regulations. He recently covered a series of cases in which the contours of Section 230’s liability immunity are getting chipped away in all sorts of dangerous ways. As it’s unlikely that I would have the time to cover any of these cases myself, Eric has agreed to let me repost his analysis here. That said, his post is written for an audience that already understands Section 230 and its nuances, so be aware that it doesn’t go deep into the background details. If you’re just starting to understand Section 230, here’s a good place to start, though, as Eric notes, the old knowledge may be increasingly less important.

Section 230 cases are coming faster than I can blog them. This long blog post rounds up five defense losses, riddled with bad judicial errors. Given the tenor of these opinions, how are any plaintiffs NOT getting around Section 230 at this point?

District of Columbia v. Meta Platforms, Inc., 2024 D.C. Super. LEXIS 27 (D.C. Superior Ct. Sept. 9, 2024)

The lawsuit alleges Meta addicts teens and thus violates DC’s consumer protection act. Like other cases in this genre, it goes poorly for Facebook.

Section 230

The court distills and summarizes the conflicting precedent: “The immunity created by Section 230 is thus properly understood as protection for social media companies and other providers from ‘intermediary’ liability—liability based on their role as mere intermediaries between harmful content and persons harmed by it…. But-for causation, however, is not sufficient to implicate Section 230 immunity…. Section 230 provides immunity only for claims based on the publication of particular third-party content.”

I don’t know what “particular” third-party content means, but the statute doesn’t support any distinction based on “particular” and “non-particular” third-party content. It refers to information provided by another information content provider, which divides the world into first-party content and third-party content. Section 230 applies to all claims based on third-party content, whether that’s an individual item or the entire class.

Having manufactured the requirement that the claim must be based on “particular” content to trigger Section 230, the court says none of the claims do that.

With respect to the deceptive omissions claims, Section 230 doesn’t apply because “Meta can simply stop making affirmative misrepresentations about the nature of the third-party content it publishes, or it can disclose the material facts within its possession to ensure that its representations are not misleading or deceptive within the meaning of the CPPA.”

With respect to a different deceptive omissions claim, the court says Facebook “could avoid liability for such claims in the future without engaging in content moderation. It could disclose the information it has about the prevalence of sexual predators operating on its platforms, and it could take steps to block adult strangers from contacting minors over its apps.” I’d love for the court to explain how blocking users from contacting each other on apps differs from “content moderation.”

With respect to yet other deceptive omissions claims, the court says “If the claim seeks to hold Meta liable for omissions that make its statements about eating disorders misleading, then, as with the omissions regarding the prevalence of harmful third-party content on Meta’s platforms, the claim seeks to hold Meta liable for its own false, incomplete, and otherwise misleading representations, not for its publication of any particular third-party content. If the claim seeks to hold Meta liable for breaching a duty to disclose the harms of its platforms’ features, including the plastic surgery filter, then the claim is based on Meta’s own conduct, not on any third-party content published on its platforms.”

First Amendment

“Meta’s counsel was unable to articulate any message expressed or intended through Meta’s implementation and use of the challenged design features.” The court distinguishes a long list of precedents that it says don’t apply because they “involved state action that interfered with messaging or other expressive conduct—a critical element that is not present in the case before this court.” I don’t see how the court could possibly say that a government agency suing Facebook for not complying with government rules about the design of speech venues isn’t state action that interferes with expressive conduct. (Also, the “expressive conduct” phrase doesn’t apply here. It’s called “publishing”).

The court distinguishes the Moody case:

Deprioritizing content relates to “the organizing and presenting” of content, as do the design features at issue here. But the reason deprioritizing specific content or content providers can be expressive is not that it affects the way content is displayed; it can be expressive because it indicates the provider’s relative approval or disapproval of certain messages.

I don’t understand how the court can acknowledge that Facebook’s design features relate to the “organizing and presenting” of content and still conclude that those features are not expressive.

The court continues with its odd reading of Moody:

The Supreme Court, moreover, expressly limited the reach of its holding in Moody to algorithms and other features that broadly prioritize or deprioritize content based on the provider’s preferences, and it emphasized that it was not deciding whether the First Amendment applies to algorithms that display content based on the user’s preferences

Huh? Every algorithm encodes the “provider’s preferences.” If the court is trying to say that Facebook didn’t intend to preference harmful content, that ignores the inevitability that the algorithm will make Type I/Type II errors. The court sidesteps this:

the District’s unfair trade practice claims challenge Meta’s use of addictive design features without regard to the content Meta provides, and Meta has failed to articulate even a broad or vague message it seeks to convey through the implementation of its design features. So although regulations of community norms and standards sometimes implicate expressive choices, the design features at issue here do not.

Every “design feature” implicates expressive choices. Perhaps Facebook should have done a better job articulating this, but the judge was far too eager to disrespect the editorial function.

The court adds that if the First Amendment applied, the enforcement action would be subject to, and would survive, intermediate scrutiny. “The District’s stated interest in prosecuting its claims is the protection of children from the significant adverse effects of the addictive design features on Meta’s social media platforms. The District’s interest has nothing to do with the subject matter or viewpoint of the content displayed on Meta’s platforms; indeed, the complaint alleges that the harms arise without regard to the content served to any individual user.”

It’s impossible to say with a straight face that the district is uninterested in the subject matter or viewpoint of the content displayed on Meta’s platforms. Literally, other parts of the complaint target specific subject matters.

Prima Facie Elements

The court says that the provision of Internet services constitutes a “transfer” for purposes of the consumer protection statute, “even though Meta does not charge a fee for the use of its social media platforms.”

The court says that the alleged health injuries caused by the services are sufficient harm for statutory purposes, even if no one lost money or property.

The court says some of Meta’s public statements may have been puffery, and other statements may not have been issued publicly, but “many of the statements attributed to Meta and its top officials in the complaint are not so patently hyperbolic that it would be implausible for a reasonable consumer to be misled by them. Others are sufficiently detailed, quantifiable, and capable of verification that, if proven false, they could support a deceptive trade practice claim.”

State v. Meta Platforms, Inc., 2024 Vt. Super. LEXIS 146 (Vt. Superior Ct. July 29, 2024)

Similar to the DC case, the lawsuit alleges Meta addicts teens and thus violates Vermont’s consumer protection act. This goes as well for Facebook as it did in DC.

With respect to Section 230, the court says:

Meta may well be insulated from liability for injuries resulting from bullying or sexually inappropriate posts by Instagram users, but the State at oral argument made clear that it asserts no claims on those grounds….

The State is not seeking to hold Meta liable for any content provided by another entity. Instead, it seeks to hold the company liable for intentionally leading Young Users to spend too much time on-line. Whether they are watching porn or puppies, the claim is that they are harmed by the time spent, not by what they are seeing. The State’s claims do not turn on content, and thus are not barred by Section 230.

The State’s deception claim is also not barred by Section 230 for the same reason—it does not depend on third party content or traditional editorial functions. The State alleges that Meta has failed to disclose to consumers its own internal research and findings about Instagram’s harms to youth, including “compulsive and excessive platform use.”  The alleged failure to warn is not “inextricably linked to [Meta’s] alleged failure to edit, monitor, or remove [] offensive content.”

Facebook’s First Amendment defense fails because it “fails to distinguish between Meta’s role as an editor of content and its alleged role as a manipulator of Young Users’ ability to stop using the product. The First Amendment does not apply to the latter.” Thus, the court characterizes the claims as targeting conduct, not content, which only get rational basis scrutiny. “Unlike Moody, where the issue was government restrictions on content…it is not the substance of the speech that is at issue here.”

T.V. v. Grindr, LLC, 2024 U.S. Dist. LEXIS 143777 (M.D. Fla. Aug. 13, 2024)

This is an extremely long (116 pages), tendentious, and very troubling opinion. The case involves a minor, TV, who used Grindr’s services to match with sexual abusers and then committed suicide. The estate sued Grindr for the standard tort claims plus a FOSTA claim. The court dismisses the FOSTA claim but rejects Grindr’s Section 230 defense for the remaining claims. It’s a rough ruling for Grindr and for the Internet generally, twisting many standard industry practices and statements into reasons to impose liability and doing a TAFS-judge-style reimagining of Section 230. Perhaps this ruling will be fixed in further proceedings, or perhaps this is more evidence we are nearing the end of the UGC era.

FOSTA

The court dismissed the FOSTA claim:

T.V., like the plaintiffs in Red Roof Inns, fails to allege facts to make Grindr’s participation in a sex trafficking venture plausible. T.V. alleges in a conclusory manner that the venture consisted of recruiting, enticing, harboring, transporting, providing, or obtaining by other means minors to engage in sex acts, without providing plausible factual allegations that Grindr “took part in the common undertaking of sex trafficking.”…, the allegations that Grindr knows minors use Grindr, knows adults target minors on Grindr, and knows about the resulting harms are insufficient.

This is the high-water mark of the opinion for Grindr. It’s downhill from here.

Causation

The court says the plaintiff adequately alleged that Grindr was the proximate cause of TV’s suicide:

reasonable persons could differ on whether Grindr’s conduct was a substantial factor in producing A.V.’s injuries or suicide or both and whether the likelihood adults would engage in sexual relations with A.V. and other minors using Grindr was a hazard caused by Grindr’s conduct

Strict Liability

The court doesn’t dismiss the strict liability claim because the Grindr “service” was a “product.” (The plaintiff literally called Grindr a service.) The court says:

Like Lyft in Brookes, Grindr designed the Grindr app for its business; made design choices for the Grindr app; placed the Grindr app into the stream of commerce; distributed the Grindr app in the global marketplace; marketed the Grindr app; and generated revenue and profits from the Grindr app….

Grindr designed and distributed the Grindr app, making Grindr’s role different from a mere service provider, putting Grindr in the best position to control the risk of harm associated with the Grindr app, and rendering Grindr responsible for any harm caused by its design choices in the same way designers of physically defective products are responsible

This is a bad ruling for virtually every Internet service. You can see how problematic it is from this passage:

T.V. is not trying to hold Grindr liable for “users’ communications,” about which the pleading says nothing. T.V. is trying to hold Grindr liable for Grindr’s design choices, like Grindr’s choice to forego age detection tools, and Grindr’s choice to provide an interface displaying the nearest users first

These “design choices” are Grindr’s speech, and they facilitate user-to-user speech. The court’s anodyne treatment of the speech considerations doesn’t bode well for Grindr.

The court says TV adequately pleaded that Grindr’s design choices were “unreasonably dangerous”:

Grindr designed its app so anyone using it can determine who is nearby and communicate with them; to allow the narrowing of results to users who are minors; and to forego age detection tools in favor of a minor-based niche market and resultant increased market share and profitability, despite the publicized danger, risk of harm, and actual harm to minors. At a minimum, those allegations make it plausible that the risk of danger in the design outweighs the benefits.

Remember, this is a strict liability claim, and these alleged “defects” could apply to many UGC services. In other words, the court’s analysis raises the spectre of industry-wide strict liability–an unmanageable risk that will necessarily drive most or all players out of the industry. Uh oh.

Also, every time I see the argument that services didn’t deploy age authentication tools, when the legal compulsion to do so has been in conflict with the First Amendment for over a quarter-century, I wonder how we got to the point where the courts so casually disregard the constitutional limits on their authority.

Grindr tried a risky argument that everyone knows it’s a dangerous app, so basically, caveat user. With the argument flipped around on it, all of a sudden the court doesn’t find the offline analogies so persuasive:

Grindr fails to offer convincing reasons why this Court should liken the Grindr app to alcohol and tobacco—products used for thousands of years—and rule that, as a matter of Florida law, there is widespread public knowledge and acceptance of the dangers associated with the Grindr app or that the benefits of the Grindr app outweigh the risk to minors.

Duty of Care

The court says TV adequately alleged that Grindr violated its duty of care:

Grindr’s alleged conduct created a foreseeable zone of risk of harm to A.V. and other minors. That alleged conduct, some affirmative in nature, includes launching the Grindr app “designed to facilitate the coupling of gay and bisexual men in their geographic area”; publicizing users’ geographic locations; displaying the image of the geographically nearest users first; representing itself as a “safe space”; introducing the “Daddy” “Tribe,” as well as the “Twink” “Tribe,” allowing users to “more efficiently identify” users who are minors; knowing through publications that minors are exposed to danger from using the Grindr app; and having the ability to prevent minors from using Grindr Services but failing to take action to prevent minors from using Grindr Services. These allegations describe a situation in which “the actor”—Grindr—”as a reasonable [entity], is required to anticipate and guard against the intentional, or even criminal, misconduct of others….

considering the vulnerabilities of the potential victims, the ubiquitousness of smartphones and apps, and the potential for extreme mental and physical suffering of minors from the abuse of sexual predators, the Florida Supreme Court likely would rule that public policy “lead[s] the law to say that [A.V. was] entitled to protection,” and that Grindr “should bear [the] given loss, as opposed to distributing the loss among the general public.”…Were Grindr a physical place people could enter to find others to initiate contact for sexual or other mature relationships, the answer to the question of duty of care would be obvious. That Grindr is a virtual place does not make the answer less so.

That last sentence is so painful. There are many reasons why a “virtual” place may have different affordances and warrant different legal treatment than “physical” space. For example, every aspect of a virtual space is defined by editorial choices about speech, which isn’t true in the offline world. The court’s statement implicates Internet Law Exceptionalism 101, and this judge–who was so thorough in other discussions–oddly chose to ignore this critical question.

IIED/NIED

It’s almost never IIED, and here there’s no way Grindr intended to inflict emotional distress on its users…right?

Wrong. The court says Grindr engaged in outrageous conduct based on the allegation that Grindr “served [minors] up on a silver platter to the adult users of Grindr Services intentionally seeking to sexually groom or engage in sexual activity with persons under eighteen.” I understand the court was making all inferences in favor of the plaintiff, but “silver platter”–seriously? The court ought to push back on such rhetorical overclaims rather than rubberstamp them to discovery.

The court also says that Grindr directed the emotional distress at TV and never discusses Grindr’s intent at all. I’m not sure how it can be IIED without that intent, but the court didn’t seem perturbed.

The NIED claim isn’t dismissed because of the assailants’ physical contact with TV, however distant that is from Grindr.

Negligent Misrepresentations

The court says that Grindr’s statement that it “provides a safe space where users can discover, navigate, and interact with others in the Grindr Community” isn’t puffery, especially when combined with Grindr’s express “right to remove content.” Naturally, this is a troubling legal conclusion. Every TOS reserves the right to remove content (and the First Amendment provides that right as well), while the word “safe” has no well-accepted definition and could mean pretty much anything–and it certainly doesn’t act as a guarantee that no harm will ever befall a Grindr user. Grindr’s TOS also expressly said that it didn’t verify users, yet the court said it was still justifiable to rely on the word “safe” over the express statements about why the site might not be safe.

Section 230

The prior discussion shows just how impossible it will be for Internet services to survive their tort exposure without Section 230 protection. If Section 230 doesn’t apply, then plaintiffs’ lawyers can always find a range of legal doctrines that might apply, with existential damages at stake if any of the claims stick. Because services can never plaintiff-proof their offerings to the plaintiff lawyers’ satisfaction, they have to settle up quickly to prevent those existential damages, or they have to exit the industry because any profit will be turned over to the plaintiffs’ lawyers.

Given the tenor of the court’s discussion about the prima facie claims, any guess how the Section 230 analysis goes?

The court starts with the premise that it’s not bound by any prior decisions:

The undersigned asked T.V. to state whether binding precedent exists on the scope of § 230(c)(1). T.V. responded, “This appears to be an issue of first impression in the Eleventh Circuit[.]” Grindr does not dispute that response.

The court is playing word games here. The court is discounting a well-known precedential case, Almeida v. Amazon from 2006. The court says Almeida’s 230(c)(1) discussion–precisely on point–was dicta. That ruling focused primarily on 230(e)(2), the IP exception to 230, but the case only reaches that issue based on the initial applicability of 230(c)(1). In addition, there are at least three non-precedential 11th Circuit cases interpreting Section 230(c)(1), including McCall v. Zotos, Dowbenko v. Google, and Whitney v. Xcentric (the court acknowledges the first two and ignores the Whitney case). These rulings may not be precedential, but they are indicators of how the 11th Circuit thinks of Section 230 and deserved some engagement rather than being ignored. The Florida federal court might also apply Florida state law, which includes the old Doe v. AOL decision from the Florida Supreme Court and numerous Florida intermediate appellate court rulings.

The court acknowledges an almost identical case from a Florida district court, Doe v. Grindr, where Grindr prevailed on Section 230 grounds. This court says that judge relied on “non-binding cases”–but if there are no binding 11th Circuit rulings, what else was that court supposed to do? And this court has already established that it will also rely on non-binding cases, so doesn’t pointing this out also undercut the court’s own opinion? The court also acknowledges MH v. Omegle, not quite identical to Grindr but pretty close and also a 230 defense-side win. This court also disregards it because it relied on “non-binding cases.”

This explains how the court treats ALL precedent as presumptively irrelevant so that it can treat Section 230 as a blank interpretative slate despite hundreds of precedent cases. The court thus forges its own path, redoes 230 analyses that have been done in superior fashion previously dozens of times, and cherrypicks precedent that supports its predetermined conclusion–a surefire recipe for problematic decisions. So unfortunate.

The court says “The meaning of § 230(c)(1) is plain. The provision, therefore, must be enforced according to its terms.” Because the language is so plain 🙄, the court uses dictionary definitions of “publisher” and “speaker” (seriously). It says that the CDA “sought to protect minors and other users from offensive content and internet-based crimes” (basically ignoring the legislative history), and because the CDA exhibited schizophrenia about its goals (something fully explained in the literature–extensively–but the court didn’t look), the court thinks that, to “avoid the predominance of some congressional purposes over others, the provision should be interpreted neither broadly nor narrowly.”

Reminder: the Almeida opinion, in language this court chooses to ignore, said “The majority of federal circuits have interpreted the CDA to establish broad ‘federal immunity to any cause of action that would make service providers liable for information originating with a third-party user of the service’” (citing Zeran, emphasis added).

Having gone deeply rogue, the court says none of the plaintiff’s common law claims treat Grindr as the publisher of third-party content. “Grindr is responsible, in whole or in part, for the “Daddy” “Tribe,” the “Twink” “Tribe,” the filtering code, the “safe space” language, and the geolocation interface. To the extent the responsible persons or entities are unclear, discovery, not dismissal, comes next.”

The court acknowledges that “Grindr brings to the Court’s attention many cases” supporting Grindr’s Section 230 arguments, including the Fifth Circuit’s old Doe v. MySpace case. To “explain” why these “many cases” don’t count, the court marshals up the following citations: Justice Thomas’ statement in Malwarebytes, Justice Thomas’ statement in Doe v. Snap, Judge Katzmann’s dissent in Force v. Facebook, Judge Gould’s concurrence/dissent in Gonzalez v. Google (which was likely rendered moot by the Supreme Court’s punt on the case), and, randomly, a single district court case from Oregon (AM v. Omegle). Notice a theme here? The court is relying exclusively on non-binding precedent–indeed, other than the Omegle ruling, not even “precedent” at all.

With zero trace of irony, after this dubious stack of citations, the court says it can ignore Grindr’s citations because “MySpace and the other cases on which Grindr relies are non-binding and rely on non-binding precedent.” Hey judge…the call is coming from inside the house…

(I could have sworn this was the work of a TAFS judge, especially with the shoutouts to Justice Thomas’ non-binding statements, the poorly researched conclusions, and cherrypicked citations. But no, Magistrate Judge Barksdale appears to be an Obama appointee).

Because this is a magistrate report, it will be reviewed by the supervising judge. For all of its prolixity, it’s shockingly poorly constructed and has many sharp edges. Grindr has unsurprisingly filed objections to the report. I’m sure this case will be appealed to the 11th Circuit regardless of what the supervising judge says.

A.S. v. Salesforce, Inc., 2024 WL 4031496 (N.D. Tex. Sept. 3, 2024)

Another FOSTA sex trafficking case against Salesforce for providing services to Backpage. The court previously rejected the Section 230 defense in a factually identical case (SMA v. Salesforce) and summarily rejects it this time.

In yet another baroque and complex opinion that’s typical of FOSTA cases, the court greenlights one claim of tertiary liability against Salesforce but rejects a different tertiary liability claim. If I thought there was value in trying to reconcile those conclusions, I would do it to benefit my readers. Instead, I was baffled by the court’s razor-thin distinctions about the various ecosystem players’ mens rea and actus reus (another common attribute of FOSTA decisions).

ProcureNet Ltd. v. Twitter, Inc., 2024 WL 4290924 (Cal. App. Ct. Sept. 25, 2024)

The plaintiffs were heavy Twitter advertisers, spending over $1M promoting their accounts. Twitter suspended all of the accounts in 2022 (pre-Musk) for alleged manipulation and spam. The plaintiffs claim they were targeted by a brigading attack, but allegedly Twitter disregarded their evidence of that. Eventually, the brigading attack took out the plaintiffs’ personal accounts too. The plaintiffs claim Twitter breached its implied covenant of good faith and fair dealing. Twitter filed an anti-SLAPP motion to strike.

The court says that Twitter’s actions related to a matter of public interest. However, the court says the plaintiffs’ claims have enough merit to overcome the anti-SLAPP motion.

Twitter argued that Section 230 protected its decisions. The court disagrees: “the duty Twitter allegedly violated derives from its Advertising Contracts with plaintiffs, not from Twitter’s status as a publisher of plaintiffs’ content.”

Twitter cited directly relevant California state court decisions in Murphy and Prager that said Section 230 could apply to contract-based claims that would override the service’s editorial discretion, but the court distinguishes them: “These cases, however, do not address claims that a provider breached a separate enforceable agreement for which consideration was paid, like the Advertising Contracts here.” This makes no sense. Whether or not cash was involved, the Murphy and Prager cases involved mutual promises supported by contract consideration. In other words, in each case, the defendant had a contract agreeing to provide services to the plaintiff that the plaintiff valued, so I don’t see any basis to distinguish among these cases. The court might have found better support by citing the also-on-point Calise and YOLO Ninth Circuit cases, but neither case was cited.

Beyond the Section 230 argument, Twitter said that its contracts reserved the unrestricted discretion to deny services. The court says that the unrestricted discretion might still be subject to the implied covenant of good faith and fair dealing: “the purpose of the Advertising Contracts here was not to give Twitter discretion—its purpose, as alleged in plaintiffs’ complaint, was to buy advertising for plaintiffs’ accounts on Twitter’s platform.” In other words, the court effectively reads the reservation of discretion out of the contract entirely.

How bad a loss is this? The plaintiffs had moved to voluntarily dismiss the case while it was on appeal, so they no-showed at the appeal and the court ruled on uncontested papers filed only by Twitter. Ouch. The voluntary dismissal also makes this decision into something of an advisory opinion, and I’m surprised the court decided to issue it rather than deem the appeal moot.

BONUS: Corner Computing Solutions v. Google LLC, 2024 WL 4290764 (W.D. Wash. Sept. 25, 2024). This is also an implied covenant of good faith and fair dealing case. The plaintiff thinks Google should have removed some allegedly fake reviews. The court says the TOS never promised the removal of those reviews, but some ancillary disclosures might have implied that Google would remove them. Thus, despite dismissing the case, the court has some sharp words for Google:

It may be misleading for Defendant to state in a policy that fake engagement will be removed while admitting in its briefing that its policies are merely aspirational. But that does not make Defendant’s actions here a breach of contract.