Apple Has To Pull Its “AI” News Synopses Because They Were Routinely Full Of Shit

Techdirt. 2025-01-29

While “AI” (large language models) certainly could help journalism, the fail-upward brunchlords in charge of most modern media outlets instead see the technology as a way to cut corners, undermine labor, and badly automate low-quality, ultra-low-effort, SEO-chasing clickbait.

As a result we’ve seen an endless number of scandals where companies use LLMs to create entirely fake journalists and hollow journalism, usually without informing their staff or their readership. When they’re caught (as we saw with CNET, Gannett, or Sports Illustrated), they usually pretend to be concerned, throw their AI partner under the bus, then get right back to doing it.

Big tech companies, obsessed with convincing Wall Street they’re building world-changing innovation and real sentient artificial intelligence (as opposed to unreliable, error-prone, energy-sucking bullshit machines), routinely fall into the same trap. They’re so focused on making money that they routinely don’t bother to make sure the tech in question actually works.

For example, last December Apple faced criticism after its Apple Intelligence “AI” feature was found to be sending inaccurate news synopses to phone owners:

“This week, the AI-powered summary falsely made it appear BBC News had published an article claiming Luigi Mangione, the man arrested following the murder of healthcare insurance CEO Brian Thompson in New York, had shot himself. He has not.”

Yeah, whoops. So recently, Apple pulled the feature offline:

“On Thursday, Apple deployed a beta software update to developers that disabled the AI feature for news and entertainment headlines, which it plans to later roll out to all users while it works to improve the AI feature. The company plans to re-enable the feature in a future update.

As part of the update, the company said the Apple Intelligence summaries, which users must opt into, will more explicitly emphasize that the information has been produced by AI, signaling that it may sometimes produce inaccurate results.”

There’s a reason these companies haven’t been quite as keen to fully embrace AI across the board (Google, for example, hasn’t implemented Gemini into its hardware voice assistants): they know there’s potential for absolute havoc and legal liability. But they had no problem rushing to implement AI in journalism to help with ad engagement, making it pretty clear how much these companies tend to value actual journalism in the first place.

We’ve seen the same nonsense over at Microsoft, which was so keen to leverage automation to lower labor costs and glom onto ad engagement that it rushed to implement AI across the entirety of its MSN website, never showing much concern for the fact that the automation routinely produced false garbage. Google’s search automation efforts have been just as sloppy and reckless.

Large language models and automation certainly have benefits, and they certainly aren’t going anywhere. But there’s zero real indication that most tech or media companies have any interest in leveraging undercooked early iterations responsibly. After all, there’s money to be made. Which is, not coincidentally, precisely how many of these same companies treated the dangerous privacy implications of industrialized commercial surveillance for the better part of the last two decades.