"Never give AI the Nuclear Codes" Really? Have you seen Dr. Strangelove?

Computational Complexity 2023-07-23

(I covered a similar topic here.)

The June 2023 issue of The Atlantic has an article titled:

                                    Never Give Artificial Intelligence the Nuclear Codes

                                    by Ross Andersen

(This might be a link to it: here. It might be behind a paywall.)

As you can tell from the title, the author is against giving AI the ability to launch nuclear weapons.

I've read similar things elsewhere. 

There is a notion that PEOPLE will be BETTER able to discern when a nuclear strike is needed (and, more importantly, NOT needed) than an AI. Consider the following two scenarios:

1) An AI has the launch codes and thinks a nuclear strike is appropriate (but is wrong). It launches, and there is no override for a human to intervene.

2) A human knows in his gut (and from some of what he sees) that a nuclear attack is needed. The AI says NO IT'S NOT (the AI knows A LOT more than the human and is correct). The human overrides the AI (pulls out the plug?) and launches the attack.

Frankly, I think (2) is more likely than (1). Perhaps there should be a mechanism so that BOTH the AI and the human have to agree; a toy sketch of such a rule is below. Of course, both AIs and humans are clever and may find a way around the override.
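
To make that concrete, here is a minimal sketch in Python of such a two-party rule. Everything in it (the Decision class, two_party_rule, the sample reasons) is made up for illustration; it models only the logic of unanimous consent, not any real launch system.

from dataclasses import dataclass

# Toy sketch of a "two-party consent" rule: a launch order goes through
# only if BOTH the human and the AI independently approve it.
# All names here are hypothetical, invented for this post.

@dataclass
class Decision:
    approve: bool   # does this party consent to the strike?
    reason: str     # recorded so any disagreement can be audited later

def two_party_rule(human: Decision, ai: Decision) -> bool:
    """Launch only on unanimous consent; any single NO blocks the launch."""
    if human.approve and ai.approve:
        return True
    # Either party alone can veto: scenario (1) is blocked by the
    # human's NO, scenario (2) by the AI's NO.
    return False

if __name__ == "__main__":
    # Scenario (2) from this post: the human says yes, the AI says no.
    human = Decision(approve=True, reason="gut feeling plus partial sensor data")
    ai = Decision(approve=False, reason="full sensor picture shows no attack")
    print("Launch?", two_party_rule(human, ai))  # Launch? False

The check itself is trivial; the hard part, as noted above, is preventing a clever human (or AI) from bypassing it entirely.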

Why do people think that (1) is more likely than (2)? Because they haven't seen the movie Dr. Strangelove. They should!