Biglaw AI Apocalypse Brews As One Fake Case Turns Into Litany Of False Cites

beSpacific 2025-08-05

Above The Law: “After confessing the firm submitted a post-trial motion citing a non-existent case helpfully supplied by ChatGPT, Goldberg Segalla cut ties with partner Danielle Malaty. Presumably, the firm hoped that would be the end of it. But it seems that filing was just the amuse-bouche! Upon realizing that Goldberg Segalla had inserted one fake citation, the plaintiffs wondered if it might be worth another quick glance at the docket. You know… because most lawyers enjoy chasing down important research instead of shrugging it off as good enough. A newly filed motion for sanctions in Jordan v. Chicago Housing Authority suggests the plaintiffs made the right move, outlining a systemic AI hallucination jamboree going well beyond an errant cite to the fictitious Mack v. Anderson case that started this ball rolling. After years of litigation, an Illinois jury awarded $24.1 million in this pediatric lead poisoning case where two children were left with irreversible brain damage. Rather than owning up to that outcome, the Chicago Housing Authority (CHA to its friends) and Goldberg Segalla opted to litigate the verdict into oblivion. It was the motion asking the judge to set aside the verdict that first uncovered that the Housing Authority’s legal arsenal might be cobbled together from AI-generated fan fiction. Over the course of five days, having only the opportunity to review a slice of the docket, the plaintiffs discovered that when it comes to hallucinated research, not unlike Lay’s Potato Chips, betcha can’t cite just one.”