35 notable AI fails from 2025
beSpacific 2025-12-19
Indicator: “Just because it’s ‘intelligent’ doesn’t mean it’s always right.

Errors are a wonderful thing. That may be a strange thing for a former fact-checker to write in a newsletter about digital deception, but bear with me.

Errors are often funny, because – like good jokes – they subvert meaning in unexpected ways. I recently wrote to a reporter that “as the more established websites come under sustained regulatory pressure […] the winnows are ready to try and capture market share.” The reporter very politely wrote back to check whether I meant “minnows.” Mrs Malaprop is hilarious for a reason.

But there’s more to errors than unintended humor. Errors can reveal what their authors wish to be true, or expose a breakdown in verification processes.

Before tech platforms dominated how we exchange and consume information, traditional media errors were a big deal. The same year that Mark Zuckerberg launched thefacebook.com, Indicator co-founder Craig Silverman launched Regret the Error, a blog about media errors (one of the two was more profitable than the other). For several years, Craig wrote a column about the most notable media slip-ups that eventually moved to the Poynter Institute’s website. I picked up that feature from 2015 to 2018.
21 years later, tech platforms play a huge editorial role in our information diets and greatly influence how we apportion our attention. While Facebook, X, and others briefly dabbled in curating the news explicitly – with mixed results – they mostly shied away from becoming publishers. That meant they didn’t typically make the same kind of mistakes as traditional media outlets.

That has changed with AI. Chatbots are increasingly treated as a direct source, or publisher, of information. Models produce net new content, including articles, images, videos, and more. Tracking LLM fails is one way to surface the potential biases and ineffective fact-checking infrastructure of the big tech companies.

AI fails are, of course, human fails at heart, even if the person responsible isn’t always obvious. Responsibility for an error might lie with the authors of the information in the training data, the developers of the AI tool, or the end users. What’s certain is that AI errors can have significant consequences. I was at Google when it lost billions in market value because of Gemini’s “diverse Nazis” woes. These errors also reveal a lot about how general-purpose AI tools work and how society is using them.

So here’s a collection of notable AI fails from the past twelve months. Our list excludes intentional misuse of AI. We are not interested (for once!) in disinformation or scams, only misinformation and slop. I also left out acute AI-enabled harms like self-harm and image-based sexual abuse because they are less about accuracy and more about persuasion and safety.