It’s open season on the unabashedly earnest

Statistical Modeling, Causal Inference, and Social Science 2026-01-07

This is Jessica. In response to my post on slop, Thomas Basbøll shared a 1967 New Yorker essay by Jacob Brackman about the havoc wreaked by the emergence of the “Put-On” in 1960s (and slightly earlier) art and culture. True to its name, the “Put-On” refers to a response that is deliberately outlandish yet ambiguous about intention, confusing the other party and causing them to doubt its sincerity. 

The put-on is perhaps best exemplified by Bob Dylan’s smart-alecky style of responding to interviewers, in which he alternates between crazy stories about his past, exasperation with the counter-culture of which he’s part, and pointed questions turned back on the interviewer, who is left to wonder: How much of this is real? Is he caricaturing himself, or is this actually his personality? But the put-on also appears in art and culture more broadly – e.g., is John Cage making an important statement or just putting the audience on with these silence performances? Is Andy Warhol out to make fools of his critics with the Brillo boxes? The put-on is unsettling because you cannot resolve whether meaning is intended, or still to come, or whether you are just wasting your time: “put-ons may disguise the fact that someone has nothing of interest to say—may, indeed, give precisely the opposite impression.” 

Today the put-on takes different forms – video shorts of animals doing things that are just beyond the boundary of what seems plausible, enough so that we need to watch a second time to figure out if it’s real. Essays or presentations by our students that exude a little too much confidence given their lack of experience with the topic, but which they deny using generative AI to write. There has always been plenty of bullshit on the internet, and plenty of cheating in classes, but Brackman’s stages of the put-on are especially familiar lately:

  1. You’re sucked in.
  2. You become confused.
  3. You resent (or appreciate) having been tricked. 

Patience games

The problem with the put-on, whether orchestrated by musicians or artists in the 60s or by today’s language models and image generators, is that the ambiguity is strategic. You don’t know if it is going somewhere. You’re stuck sitting with your uncertainty, reflecting on how far your good-naturedness extends. 

In teaching, when you think you’re facing undisclosed (over)reliance on generative AI for an assignment, do you take the sincere path of asking the student what they did, and trusting their response? Do you try to catch them in a lie? Or do you decide it’s just not worth your time to sleuth and let the students decide for themselves if they will use the course to learn something versus play the game?

We find ourselves facing games we may not want to play, and for which we have no precedent. This year ICML, one of the big machine learning conferences, is offering authors a choice: opt in to a permissive policy about generative AI use in reviewing, or go the purist route, where your reviewers can’t use it at all, and you can’t use it at all for your own reviews either. The reviewer matching process sounds like it could get messy, and ML conferences are already known for their review randomness. Which option is likely to be less noisy? 

Not to mention that as a reviewer, you must increasingly wonder whether the paper you are preparing your comments on is an experiment in automated science. Will the authors even read your feedback? Do they care to improve the work? Or have you been inadvertently reduced to a Turing signal? 

It’s not easy for the “unabashedly earnest,” who dislike playing games and want to retain a certain innocence in their encounters with others, but who also want to stay ahead of the curve and not get duped. The put-on depends on the gullibility of its victim, so you face a choice: keep going as usual and feel used at times, or become more skeptical about people in general. “Please don’t make me part of your game” is becoming the refrain for a new way of life. 

There’s little reason to think it will get better anytime soon. It’s still early and many people are still playing the old game, or still experimenting with how much they can get generative AI to do. We should be preparing for more disruption. 

From the sacred to the profane probabilistic

I find myself thinking about what kinds of signals I consider more sacred, i.e., that I would most dread seeing lose their meaning. For example, what do you do about undisclosed use of generative AI in close relationships? What if you suspect the friend or romantic partner you are corresponding with is relying on the AI-suggested responses to do the thinking? Do you ask them about it, or let it go and risk the uncertainty undermining your ability to trust them?

I would also distinguish feedback on writing that is more personal. I don’t mind an AI-generated review of my research if it’s guided by a human with the right expertise. But if, for example, I were to learn that the comments on my posts here were AI-generated, comments I took seriously as a reflection of engagement with what I wrote, or that just gave me a rewarding feeling of connecting with people outside my usual sphere (which blogging is great for), I would feel dumb, and it would probably affect my desire to blog. But this is already happening on social media, with bot accounts jumping in with random, effusive compliments on what you write. 

Another scenario that makes me cringe is the application of generative AI to the kinds of art and literature that I get inspiration from. I can potentially enjoy some AI-generated music or script folded into the mundane background track or sitcom if it’s decent, but I look to art museums for a kind of consolation on what it means to be human, to be vulnerable, to feel forms of loss on a deep level. I don’t doubt that generative AI could occasionally result in experiences that would be hard for me to distinguish from human contemporary art. But I can’t imagine myself ever getting interested in art created by AI the way I’m interested in what other people make, because of the lack of specificity or intention. So if it were to infiltrate that realm, and I could no longer count on there being a human lived experience behind art, it would bother me. 

One thing I feel relatively sure of is that I won’t be wanting an AI guru. I wouldn’t be surprised if generative AI could do a pretty good job of mimicking the kind of capriciousness associated with spiritual guides like Zen masters. But similar to art, there’s something important about the person having experiences in the world that feels essential. 

I would be curious, though, to hear counterarguments from people who have thought about AI in art or religion or more intimate personal communication. Part of what I find difficult in all this is that I consider myself generally optimistic about new technology, and open to change from it (I am, after all, a computer scientist). So I would also hate to prematurely “close my ears” like a square in the 50s or 60s walking out on Cage’s experiments in sound. And so I expect my patience to remain unstable, and I expect it to remain hard to predict which experiences will give me the urge to ditch versus hit rewind.

Institutional unraveling

Returning to the general theme of new decision points as signals erode in value, things are likely to get worse before they get better. Many of our systems are still mostly functioning at this point, because many people are still figuring out how to use generative AI, and where to draw their own line, or they are avoiding it completely. But the seeds for institutional breakdown are all around us. 

According to Zeynep Tufekci’s recent keynote at NeurIPS (which I summarize here), the problem is that society is built on assumptions that certain things will be hard (or “load-bearing frictions”), i.e., that only humans can generate outputs with certain properties. LLMs break our ability to conclude that there is proof of effort, or of authenticity or sincerity. Gatekeeping is a necessary function, and when the old mechanisms stop working, other measures will step in, like relying on the prestige of a candidate’s institution or their connections to decide who to hire, or what papers to cite or publish. When the old things are no longer hard, some other mechanism must take their place, and it may not be ideal. The point is that if you break something important, you don’t necessarily get something better unless you build something better. 

I like how this view focuses attention on outcomes within the realm of our ability to predict, like what kinds of gatekeeping will emerge or are already emerging to fill in the holes. We can then try to identify better alternatives to those, rather than trying to predict when “AGI” will happen or what the most destructive thing AI could do is. Though it doesn’t absolve us of the very humanist discomfort of watching our precious tokens of sincerity wash away, and the personal choices that come with that. 

Brackman quotes P. T. Barnum on how “People like to be fooled,” and “There’s a sucker born every minute.” While the put-on has always relied on the victim’s willingness to stay in the conversation, the answer is unlikely to be opting out of dealing with AI output entirely (though there are certainly people in that camp). Some flexibility is warranted while norms are still shifting, and organizations are doing the right thing by experimenting with new policies. But until we have better signals, the burden of the put-on stays where it was: on the person deciding in that moment whether to continue listening.