Court bars expert from accessing AI source code in high profile copyright case

internetcases · July 15, 2025


Plaintiffs sued Stability AI, alleging that the company misused their copyrighted artwork to train its generative AI models. As part of the lawsuit, plaintiffs sought to designate a University of Chicago professor and AI researcher as an expert, asserting that it would be necessary to disclose defendants’ highly confidential materials to him.

Defendants asked the court to block disclosure of their “ATTORNEYS’ EYES ONLY” and “HIGHLY CONFIDENTIAL – SOURCE CODE” material to the expert. They argued that the expert’s work – particularly concerning his projects designed to “poison” or “protect against” generative AI training – put him in “functional competition” with their models.

The court ruled in favor of defendants, finding that the risk of competitive harm outweighed plaintiffs’ need for this particular expert’s testimony. Although the expert was a respected academic rather than a commercial competitor, the court reasoned that his tools were designed to interfere with AI model performance, that he continued to develop such technologies, and that even unintentional use of confidential information could influence his future work.

Additionally, the court found that while the expert was well qualified, he was not uniquely qualified. Plaintiffs could rely on other experts in the AI field, including one of the expert’s former students, who had been approved as an expert in a related case.

Andersen v. Stability AI Ltd., 2025 WL 1927796 (N.D. Cal., July 14, 2025)