Sad times for AI safety
Shtetl-Optimized 2024-10-02
Many of you will have seen the news that Governor Gavin Newsom has vetoed SB 1047, the groundbreaking AI safety bill that overwhelmingly passed the California legislature. Newsom gave a disingenuous explanation, which no one on either side of the debate took seriously: that he vetoed the bill only because it didn’t go far enough (!!) in regulating the misuses of small models. While sad, this doesn’t come as a huge shock, as Newsom had given clear prior indications that he was likely to veto the bill, and many observers had warned that he’d do whatever he thought would most further his political ambitions and/or satisfy his strongest lobbyists. In any case, I’m reluctantly forced to the conclusion that either Governor Newsom doesn’t read Shtetl-Optimized, or else he somehow wasn’t persuaded by my post last month in support of SB 1047.
Many of you will also have seen the news that OpenAI will restructure itself as a fully for-profit company, abandoning any pretense of being controlled by a nonprofit, and that (possibly relatedly) almost no one now remains from OpenAI’s founding team other than Sam Altman himself. It now looks to many people like the previous board has been 100% vindicated in its fear that Sam did, indeed, plan to move OpenAI far away from the nonprofit mission with which it was founded. It’s a shame the board didn’t manage to explain its concerns clearly at the time, to OpenAI’s employees or to the wider world. Of course, whether you see the new developments as good or bad is up to you. Me, I kinda liked the previous mission, as well as the expressed beliefs of the previous Sam Altman!
Anyway, certainly you would’ve known all this if you read Zvi Mowshowitz. Broadly speaking, there’s nothing I can possibly say about AI safety policy that Zvi hasn’t already said in 100x more detail, anticipating and responding to every conceivable counterargument. I have no clue how he does it, but if you have any interest in these matters and you aren’t already reading Zvi, start.
Regardless of any setbacks, the work of AI safety continues. I am not and have never been a Yudkowskyan … but still, given the empirical shock of the past four years, I’m now firmly, 100% in the camp that we need to approach AI with humility about the magnitude of the civilizational transition that’s about to occur, and about our massive error bars on what exactly that transition will entail. We can’t just “leave it to the free market” any more than we could’ve left the development of thermonuclear weapons to the free market.
And yes, whether in academia or working with AI companies, I’ll continue to think about what theoretical computer science can do for technical AI safety. Speaking of which, I’d love to hire a postdoc to work on AI alignment and safety, and I already have interested candidates. Would any person of means who reads this blog like to fund such a postdoc for me? If so, shoot me an email!