The Year in Slop

beSpacific 2025-12-18

This was the year that A.I.-generated content passed a kind of audiovisual Turing test, sometimes fooling us against our better judgment, Kyle Chayka argues [no paywall] – The New Yorker: “The Turing test, a long-established tool for measuring machine intelligence, gauges the point at which a text-generating machine can fool a human into thinking it’s not a robot. ChatGPT passed that benchmark earlier this year, inaugurating a new technological era, though not necessarily one of superhuman intelligence. More recently, however, artificial intelligence passed another threshold, a kind of Turing test for the eye: the images and videos that A.I. can produce are now sometimes indistinguishable from real ones. As new, image-friendly models were trained, refined, and released by companies including OpenAI, Meta, and Google, the online public gained the ability to instantly generate realistic A.I. content on any theme they could imagine, from superhero fan art and cute animals to scenes of violence and war. “Slop,” the term of (not) art for content churned out with A.I., became ubiquitous in 2025, inspiring new sub-coinages such as “slopper,” derogatory shorthand for someone who relies on A.I. to think for them. Slop went beyond the realms of surreal amusement or frivolous entertainment; the relatively anodyne days of bizarre, obviously fake “Shrimp Jesus” images in Facebook feeds are gone. In 2025, the President of the United States relied on A.I. “agitslop” to promote his policies and taunt his detractors, and other politicians followed suit. Sam Altman, the C.E.O. of OpenAI, became a kind of omnipresent mascot on Sora, his own company’s social-media feed of slop. Not all of the content was convincing, but a lot of it came close enough—and, in our increasingly audiovisual digital world, that may turn out to represent a more significant Rubicon than the Turing test…”