More Updates!

Shtetl-Optimized 2023-11-27

For the many friends who’ve asked me to comment on the OpenAI drama: while there are many things I can’t say in public, I can say I feel relieved and happy that OpenAI still exists. This is simply because, when I think of what a world-leading AI effort could look like, many of the plausible alternatives strike me as much worse than OpenAI, a company full of thoughtful, earnest people who are at least asking the right questions about the ethics of their creations, and who—the real proof that they’re my kind of people—are racked with self-doubts (as the world has now spectacularly witnessed). Maybe I’ll write more about the ethics of self-doubt in a future post.

For now, the narrative that I see endlessly repeated in the press is that last week’s events represented a resounding victory for the “capitalists” and “businesspeople” and “accelerationists” over the “effective altruists” and “safetyists” and “AI doomers,” or even that the latter are now utterly discredited, raw egg dripping from their faces. I see two overwhelming problems with that narrative. The first problem is that the old board never actually said that it was firing Sam Altman for reasons of AI safety—e.g., that he was moving too quickly to release models that might endanger humanity. If the board had said anything like that, and if it had laid out a case, I feel sure the whole subsequent conversation would’ve looked different—at the very least, the conversation among OpenAI’s employees, which proved decisive to the outcome. The second problem with the capitalists vs. doomers narrative is that Sam Altman and Greg Brockman and the new board members are also big believers in AI safety, and conceivably even “doomers” by the standards of most of the world. Yes, there are differences between their views and those of Ilya Sutskever and Adam D’Angelo and Helen Toner and Tasha McCauley (as, for that matter, there are differences within each group), but you have to drill deeper to articulate those differences.

In short, it seems to me that we never actually got a clean test of the question that most AI safetyists are obsessed with: namely, whether or not OpenAI (or any other similarly constituted organization) has, or could be expected to have, a working “off switch”—whether, for example, it could actually close itself down, competition and profits be damned, if enough of its leaders or employees became convinced that the fate of humanity depended on its doing so. I don’t know the answer to that question, but what I do know is that you don’t know either! If there’s to be a decisive test, then it remains for the future. In the meantime, I find it far from obvious what will be the long-term effect of last week’s upheavals on AI safety or the development of AI more generally. For godsakes, I couldn’t even predict what was going to happen from hour to hour, let alone the aftershocks years from now.


Since I wrote a month ago about my quantum computing colleague Aharon Brodutch, whose niece, nephews, and sister-in-law were kidnapped by Hamas, I should share my joy and relief that the Brodutch family was released today as part of the hostage deal. While it played approximately zero role in the release, I feel honored to have been able to host a Shtetl-Optimized guest post by Aharon’s brother Avihai. Meanwhile, over 180 hostages remain in Gaza. Like much of the world, I fervently hope for a ceasefire—so long as it includes the release of all hostages and the end of Hamas’s ability to repeat the Oct. 7 pogrom.


Greta Thunberg is now chanting to “crush Zionism” — i.e., taking time away from saving civilization to ensure that half the world’s remaining Jews will be either dead or stateless in the civilization she saves. Those of us who once admired Greta, and experience her new turn as a stab to the gut, might be tempted to drive SUVs, fly business class, and fire up wood-burning stoves just to spite her and everyone on earth who thinks as she does.

The impulse should be resisted. A much better response would be to redouble our efforts to solve the climate crisis via nuclear power, carbon capture and sequestration, geoengineering, cap-and-trade, and other effective methods that violate Greta’s scruples and for which she and her friends will receive and deserve no credit.

(On Facebook, a friend replied that an even better response would be to “refuse to let people that we don’t like influence our actions, and instead pursue the best course of action as if they didn’t exist at all.” My reply was simply that I need a response that I can actually implement!)