AI and the law
Language Log 2023-10-15
Article in LAist (10/12/23):
This Prolific LA Eviction Law Firm Was Caught Faking Cases In Court. Did They Misuse AI?
Dennis Block runs what he says is California’s “leading eviction law firm.” A judge said legal citations submitted in Block's name for a recent case were fake. Six legal experts told LAist the errors likely stemmed from AI misuse.
By David Wagner
- Dennis P. Block and Associates, which describes itself as California’s “leading eviction law firm,” was recently sanctioned by an L.A. County Superior Court judge over a court filing the judge found contained fake case law.
- Six legal experts told LAist there’s a likely explanation behind the filing’s errors: misuse of a generative artificial intelligence program. They said they thought Block’s filing bears striking similarities to a brief prepared by a New York attorney who admitted to using ChatGPT back in May.
- Block’s firm was ordered to pay $999 over the violation. That’s $1 below the threshold that would have required the firm to report the sanction to the state bar for further investigation and possible disciplinary action.
- In interviews with three former clients and a review of 12 malpractice or negligence lawsuits filed against Block or his firm, LAist found more allegations of mishandled evictions.
When landlords in Southern California want to evict their tenants, they often hire Dennis Block. His law firm, Dennis P. Block and Associates, describes itself as the state’s “leading eviction law firm.”
Block once reportedly called himself “a man who has evicted more tenants than any other human being on the planet Earth.”
First thing that popped into my mind: I wonder if he is related to H. & R. Block.
But in one recent eviction case, Block didn’t just lose. His firm was also sanctioned for submitting a court filing a judge said was “rife with inaccurate and false statements.”
At first glance, the filing from April looks credible. It’s properly formatted. Block’s signature at the bottom lends a stamp of authority. Case citations are provided to bolster Block’s argument for why the tenant should be evicted.
But when L.A. Superior Court Judge Ian Fusselman took a closer look, he spotted a major problem. Two of the cases cited in the brief were not real. Others had nothing to do with eviction law, the judge said.
“This was an entire body of law that was fabricated,” Fusselman said during the sanction hearing. “It's difficult to understand how that happened.”
The court never got to the bottom of exactly how the filing was prepared. But six legal experts told LAist they could think of a likely explanation: misuse of a generative AI program.
These programs, the best known of which is ChatGPT, have come under increasing scrutiny in the legal profession. While some lawyers see potential for reducing costs to clients, experts agree that failing to check work produced by such tools is risky and unethical.
Law professors and malpractice attorneys who reviewed Block’s filing told us — based on the language used — that’s likely what happened in this case.
“I think it's virtually certain that the lawyer involved used some kind of [generative] artificial intelligence program to draft the brief,” said Russell Korobkin, a professor at UCLA School of Law who recently moderated a panel on AI in the legal profession.
…
So far it's a lot of suspicion, speculation, and accusation, but no proof.
Legal experts told us they thought the filing from Block’s firm that led to sanctions bears striking similarities to a brief prepared by a New York attorney who admitted in May to using ChatGPT. It was the first widely reported example of an attorney misusing ChatGPT since the tool debuted last year.
Like the New York filing, Block’s brief falls apart upon checking the case citations. It cited 51 Scott Street, LLC v. Sheehan (2019) and Cole v. Stevenson (1998), both of which are fictitious, according to the judge.
“This filing has the usual hallmarks of what's known as a hallucination,” said Jonathan Choi, a professor at USC Gould School of Law who reviewed the brief at LAist’s request.
Hallucinations are a known problem in which programs like ChatGPT “tend to produce things that look convincing, but actually have no basis in reality,” he said.
Chris Hoofnagle, a professor at UC Berkeley’s School of Law, said the problem stems from how platforms like ChatGPT and other large language models (LLMs, for short) work.
Based on a user’s prompt, they mine vast troves of data to predict words that should come next in a sentence. They can produce stunningly detailed documents full of words seemingly written by a human. But often those words bear no relation to the truth.
“LLMs can generate fake information that basically is what you want to believe,” Hoofnagle said. “LLMs say these things in such an unqualified and confident way that they're convincing.”
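The next-word prediction that Hoofnagle describes can be illustrated with a deliberately tiny toy model (my own illustration, not anything from the article). The probability table below is invented for the example; the point is that a model of this kind strings together tokens that are statistically plausible in legal prose, with no notion of whether the resulting "case" exists.

```python
import random

# Hypothetical next-token probabilities, hand-written for illustration.
# A real LLM learns billions of such statistical associations from text.
NEXT_TOKEN = {
    "The":       [("court", 0.6), ("tenant", 0.4)],
    "court":     [("held", 0.7), ("found", 0.3)],
    "held":      [("in", 1.0)],
    "found":     [("in", 1.0)],
    "in":        [("Cole", 0.5), ("Smith", 0.5)],
    "Cole":      [("v.", 1.0)],
    "Smith":     [("v.", 1.0)],
    "v.":        [("Stevenson", 0.5), ("Jones", 0.5)],
    "Stevenson": [("(1998)", 1.0)],
    "Jones":     [("(2001)", 1.0)],
}

def generate(start, max_tokens=10, seed=0):
    """Sample each next token from the table until no continuation is known."""
    random.seed(seed)
    tokens = [start]
    while tokens[-1] in NEXT_TOKEN and len(tokens) < max_tokens:
        words, probs = zip(*NEXT_TOKEN[tokens[-1]])
        tokens.append(random.choices(words, weights=probs)[0])
    return " ".join(tokens)

# Produces fluent, confident-sounding legal prose; nothing in the process
# ever checks that the cited case is real.
print(generate("The"))
```

Every token is chosen only because it is likely to follow the previous one, which is exactly why the output "look[s] convincing, but actually ha[s] no basis in reality."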
…
Ari Waldman, a UC Irvine School of Law professor, said tools like ChatGPT — if left unchecked — could lead to miscarriages of justice.
If you read the report of the New York filing in which a lawyer admitted that he had used ChatGPT, you will see that he claimed to be "unaware of the possibility that its contents could be false." That is proof positive that, if you ever use AI to prepare an important document, you must scrupulously check every statement in it. Some of them could be hallucinations, even though they look genuine.
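One mechanical aid to that checking (a minimal sketch of my own, not a tool mentioned in the article): pull every "Party v. Party (Year)" style citation out of a draft so that each one can be looked up and verified by a human before filing. Real citation formats are far more varied than this rough pattern covers.

```python
import re

# Rough pattern for case names like "Cole v. Stevenson (1998)":
# one or more capitalized (or numeric) words, " v. ", more capitalized
# words, then a four-digit year in parentheses.
CASE_PATTERN = re.compile(
    r"((?:[A-Z0-9][\w.,'&-]*\s)+v\.\s(?:[A-Z][\w.,'&-]*\s?)+)\((\d{4})\)"
)

def extract_citations(text):
    """Return a list of (case name, year) pairs found in the text."""
    return [(name.strip(), year) for name, year in CASE_PATTERN.findall(text)]

draft = ("The motion relies on 51 Scott Street, LLC v. Sheehan (2019) "
         "and Cole v. Stevenson (1998).")
for case, year in extract_citations(draft):
    print(f"CHECK: {case} ({year})")
```

Extraction is the easy part; the verification itself (looking each case up in a legal database) still has to be done by a person, which is precisely the step skipped in the filings described above.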
Selected readings
- "ChatGPT does Emily Dickinson writing a recipe for Pad Thai (and haiku too)" (6/9/23)
- "Pablumese" (3/22/23)
- "The mind of artificial intelligence" (3/22/23)
- "Style? Stance? What?" (10/27/18)
- "An example of ChatGPT 'hallucinating'?" (4/16/23)
- "Desultory philological, literary, and historical notes on Xanadu" (4/4/23)
- "Hallucinations: In Xanadu did LLMs vainly fancify" (4/3/23)
- "Detecting LLM-created essays?" (12/20/22)
- "Alexa down, ChatGPT up?" (12/8/22)
- "Bing gets weird — and (maybe) why" (2/16/23)
- "ChatGPT-4: threat or boon to the Great Firewall?" (3/21/23)
- "ChatGPT writes VHM" (2/28/23)
- "ChatGPT: Theme and Variations" (2/21/23)
- "GLM-130B: An Open Bilingual Pre-Trained Model" (1/25/23)
- "ChatGPT writes Haiku" (12/21/22)
- "Artificial Intelligence in Language Education: with a note on GPT-3" (1/4/23)
- "DeepL Translator" (2/16/23)
- "Uh-oh! DeepL in the classroom; it's already here" (2/22/23)
- "This is the 4th time I've gotten Jack and his beanstalk" (3/15/23)
- "ChatGPT does cuneiform studies" (5/21/23)
- "The perils of AI (Artificial Intelligence) in the PRC" (4/17/23)
- "Vignettes of quality data impoverishment in the world of PRC AI" (2/23/23)
- "Translation and analysis" (9/13/04)
- "Welcome to China" (3/10/14)
- "LLMs as coders?" (6/6/23)
- "ChatGPT has a sense of humor (sort of)" (6/10/23)
- "The AI threat: keep calm and carry on" (6/29/23)
[Thanks to Kent McKeever]