My podcast with Dan Faggella
Shtetl-Optimized 2024-09-16
Dan Faggella recorded an unusual podcast with me that’s now online. He introduces me as a “quantum physicist,” a label I never apply to myself (I’m a theoretical computer scientist) but have sort of given up on stopping others from applying to me. In any case, the ensuing 85-minute conversation has virtually nothing to do with physics, or anything technical at all.
Instead, Dan pretty much exclusively wants to talk about moral philosophy: my views about what kind of AI, if any, would be a “worthy successor to humanity,” and how AIs should treat humans and vice versa, and whether there’s any objective morality at all, and (at the very end) what principles ought to guide government regulation of AI.
So, I inveigh against “meat chauvinism,” and expand on the view that locates human specialness (such as it is) in what might be the unclonability, unpredictability, and unrewindability of our minds, and plead for comity among the warring camps of AI safetyists.
The central disagreement between me and Dan turned out to be about moral realism: Dan kept wanting to say that a future AGI’s moral values would probably be as incomprehensible to us as ours are to a sea snail, and that we need to make peace with that. I replied that, firstly, things like the Golden Rule strike me as plausible candidates for moral universals, which all thriving civilizations (however primitive or advanced) will agree about in the same way they agree about 5 being a prime number. And secondly, that if that isn’t true—if the morality of our AI or cyborg descendants really will be utterly alien to us—then I find it hard to have any preferences at all about the future they’ll inhabit, and just want to enjoy life while I can! That which (by assumption) I can’t understand, I’m not going to issue moral judgments about either.
Anyway, rewatching the episode, I was unpleasantly surprised by my many verbal infelicities, my constant rocking side-to-side in my chair, my sometimes talking over Dan in my enthusiasm, and so forth, but also pleasantly surprised by the content of what I said, all of which I still stand by despite the terrifying moral minefields into which Dan invited me. I strongly recommend watching at 2x speed, which will minimize the infelicities and make me sound smarter. Thanks so much to Dan for making this happen, and let me know what you think!
Added: See here for other podcasts in the same series and on the same set of questions, including with Nick Bostrom, Ben Goertzel, Dan Hendrycks, Anders Sandberg, and Richard Sutton.