An Economic Model of Intermediary Liability

The Laboratorium 2023-04-20

I have posted a draft of a new article, An Economic Model of Intermediary Liability. It’s a collaboration with Pengfei Zhang, whom I met when he was a graduate student in economics at Cornell and who is now a professor at UT Dallas. He was studying the economics of content takedowns, and he participated in my content-moderation reading group. We fell to talking about how to model different intermediary liability regimes, and after a lot of whiteboard diagrams, this paper was the result. I presented it at the Berkeley Center for Law & Technology Digital Services Act Symposium, and the final version will be published in the Berkeley Technology Law Journal later this year.

I’m excited about this work for two reasons. First, I think we have developed a model that hits the sweet spot between elegance and expressiveness. It has only a small number of moving parts, but it shows off all of the standard tropes of the debates around intermediary liability: collateral censorship, the moderator’s dilemma, the tradeoffs between over- and under-moderation, and more. It can describe a wide range of liability regimes, and it should be a useful foundation for future economic analysis of online liability rules. Second, there are really pretty pictures. Our diagrams make these effects visually apparent; I hope they will be useful in building intuition and in thinking about the design of effective regulatory regimes.

Here is the abstract:

Scholars have debated the costs and benefits of Internet intermediary liability for decades. Many of their arguments rest on informal economic intuitions about the effects of different liability rules. Some scholars argue that broad immunity is necessary to prevent overmoderation; others argue that liability is necessary to prevent undermoderation. These are economic questions, but they rarely receive economic answers.

In this paper, we seek to illuminate these debates by giving a formal economic model of intermediary liability. The key features of our model are externalities, imperfect information, and investigation costs. A platform hosts user-submitted content, but it does not know which of that content is harmful to society and which is beneficial. Instead, the platform observes only the probability that each item is harmful. Based on that probability, it can choose to take the content down, leave it up, or incur a cost to determine with certainty whether it is harmful. The platform’s choice reflects the tradeoffs inherent in content moderation: between false positives and false negatives, and between scalable but more error-prone processes and more intensive but costly human review.

We analyze various plausible legal regimes, including strict liability, negligence, blanket immunity, conditional immunity, liability on notice, subsidies, and must-carry, and we use the results of this analysis to describe current and proposed laws in the United States and the European Union.
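
To build a little intuition about the model, here is a minimal sketch of the per-item choice the abstract describes. The specific payoff parameters, and the revenue-and-damages framing of the platform’s private incentives, are illustrative stand-ins of my own, not the paper’s actual setup:

```python
# An illustrative sketch of the per-item moderation choice described in the
# abstract. The payoff names and numbers (b, h, c, r, d) are stand-ins for
# exposition, not the paper's actual parameterization.

def social_choice(p, b, h, c):
    """Socially optimal action for one item with harm probability p."""
    values = {
        "leave up": (1 - p) * b - p * h,  # expected benefit minus expected harm
        "take down": 0.0,                 # forgo the benefit, avoid the harm
        "investigate": (1 - p) * b - c,   # pay c to learn the truth, then host only if beneficial
    }
    return max(values, key=values.get)

def private_choice(p, r, d, c):
    """Platform's privately optimal action: it earns revenue r per hosted item
    and pays damages d per harmful item left up (d = 0 under blanket immunity,
    d = h under strict liability)."""
    values = {
        "leave up": r - p * d,
        "take down": 0.0,
        "investigate": (1 - p) * r - c,
    }
    return max(values, key=values.get)

# A borderline item: review is socially worthwhile, but a platform that
# captures only a sliver of the social benefit either under- or over-moderates,
# depending on the liability rule.
p, b, h, c, r = 0.4, 10.0, 15.0, 2.0, 1.0
print(social_choice(p, b, h, c))         # -> "investigate"
print(private_choice(p, r, d=0.0, c=c))  # blanket immunity -> "leave up"
print(private_choice(p, r, d=h, c=c))    # strict liability -> "take down"
```

Even in this toy version, the familiar tropes appear: under blanket immunity the platform leaves the borderline item up, under strict liability it takes the same item down, and in both cases it skips the investigation that would have been socially optimal.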

We will have an opportunity to make revisions, so comments are very welcome!