Large Language Models for Legal Interpretation? Don’t Take Their Word for It

beSpacific 2025-04-15

Waldon, Brandon; Schneider, Nathan; Wilcox, Ethan; Zeldes, Amir; and Tobia, Kevin, Large Language Models for Legal Interpretation? Don't Take Their Word for It (February 3, 2025). Georgetown Law Journal, Vol. 114 (forthcoming). Available at SSRN: https://ssrn.com/abstract=5123124 or http://dx.doi.org/10.2139/ssrn.5123124

"Recent breakthroughs in statistical language modeling have impacted countless domains, including the law. Chatbot applications such as ChatGPT, Claude, and DeepSeek – which incorporate 'large' neural network–based language models (LLMs) trained on vast swathes of internet text – process and generate natural language with remarkable fluency. Recently, scholars have proposed adding AI chatbot applications to the legal interpretive toolkit. These suggestions are no longer theoretical: in 2024, a U.S. judge queried LLM chatbots to interpret a disputed insurance contract and the U.S. Sentencing Guidelines. We assess this emerging practice from a technical, linguistic, and legal perspective. This Article explains the design features and product development cycles of LLM-based chatbot applications, with a focus on properties that may promote their unintended misuse – or intentional abuse – by legal interpreters. Next, we argue that legal practitioners run the risk of inappropriately relying on LLMs to resolve legal interpretive questions. We conclude with guidance on how such systems – and the language models which underpin them – can be responsibly employed alongside other tools to investigate legal meaning."