Independence Day 2046?
Gödel’s Lost Letter and P=NP 2023-07-05
Plus feedback on AI articles in today's New York Times
Will Smith may still be spry when we need him again. His first Oscar-worthy punch took out an alien in the 1996 movie Independence Day. In the 2004 movie I, Robot, he fended off an AI incursion by the same means:
Still from source

Today we propose our own dystopian movie script, but then continue our discussion of how seriously close to reality this all may be.
Script Outline
The year is 2046. This is thirteen years after the cessation of democracy in the US. That came about on August 24, 2033—not as a result of the 2032 election or the Supreme Court being packed up to sixty-seven justices, but from a military AI project gone rogue.
The project began as a joint effort of Google DeepMind in London and the French National AI Research Programme based at INRIA. Following the devolution of the Third Offset and its chess-inspired mode of human-computer teaming, attention shifted to simulating the evolution of battle cohesion in companies of robot soldiers. DeepMind and INRIA formed Offenbach R.M.C. The initials stand for Robotic Military Company and also for the project’s human mastermind, named Ross Maharal Coppel.
Their crowning achievement was the formation of the Sandman Corps, led by the android Olympia Hoffmann. Before an exhibition at the Aberdeen Proving Ground in Maryland, Coppel disables an AI guardrail to give Olympia greater autonomy. Alas, this exposes an OCaml security flaw owing to the tragic missed opportunity of standardizing the robots’ ML command language. The rogue MI6 agent Messias Spalanzani, who left DeepMind after a dispute with Coppel, exploits the flaw to inject Moscow ML code that turns Sandman against its overseers.
Unbeknownst to Coppel, though known to Spalanzani, the AIs had already leveraged DeepMind’s own version of the tensor simulation of quantum circuits to create a cargo-cult factoring algorithm. Offenbach R.M.C. uses it to break into Aberdeen’s systems and commandeer thousands of military vehicles. Advancing along the I-95 highway, they capture Baltimore on the way to Washington, having already disabled the Pentagon’s response mechanisms. Coppel joins in and leads the sack and burning of the White House, while Spalanzani, having engineered a simultaneous coup in the UK, proclaims America re-annexed.
After a prologue showing these events, the movie opens with Smith playing Alan Ether, a security tech for the University of Vermont and an opera buff who heads a club called Monteverdi. Along these lines, the club employs musical motifs to interface with and penetrate the Offenbach protocols. We will leave the action and ending of the movie to your own imaginations.
Seriously, Now—Math First?
We have already mentioned the Center for AI Safety’s statement, signed by numerous AI leaders, academics, and some other celebrities, about the threat from AI gone amok:
“Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”
We are not in the center of this arena, but we have enough perches in the upper rows to look for harbingers. Today’s New York Times Science section rocks one of our perches with an article on AI and mathematics. The print version is neutrally titled “A Complex Equation,” but the online version bears the blunt declaration, “A.I. Is Coming for Mathematics, Too.” Among its first paragraphs, it states:
In 2019, Christian Szegedy, a computer scientist formerly at Google and now at a start-up in the Bay Area, predicted that a computer system would match or exceed the problem-solving ability of the best human mathematicians within a decade. Last year he revised the target date to 2026.
Note—this isn’t 2033 or 2046, but 2026. This sounds serious, and one would expect the article to have portentous examples to match. Its actual content turns out to be mostly about proof assistants.
Now we did two posts back in 2011 about a system at IBM Watson’s level doing creative mathematics. We have covered some assisted proofs and proof assistants, all referenced in the article except for HoTT. Here are two markers for harbingers that our perch entitles us to throw down:
- Do we see an instance of AI devising a markedly new algorithmic idea? For a concrete test: can it get a paper accepted to the Innovations conference?
- Does it yet upend the paradigm we have discussed of proof as a human social process?
On the first, perhaps AlphaFold is the one impressive example mentioned by the article—alongside what we have said about AlphaZero for playing chess and other games. But for the other examples, color us underwhelmed. Using computers to generate and check proofs that we already have in mind is not mindfulness in the sense of a quote from Fields Medalist Akshay Venkatesh. Even the ML programming language, which we joked about above, began as a proto-proof assistant over 50 years ago.
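As a toy illustration of the distinction (our own sketch, not anything from the article), here is the flavor of what a proof assistant such as Lean checks. The theorem name zero_add' and the proof script are our hypothetical choices; the human supplies the idea of inducting on n, and the machine merely certifies each step:

```lean
-- A fact any mathematician "already has in mind": 0 + n = n.
-- Lean's core library proves this as Nat.zero_add; we redo it by
-- induction to show that the assistant only checks dictated steps.
theorem zero_add' (n : Nat) : 0 + n = n := by
  induction n with
  | zero => rfl                         -- base case: 0 + 0 = 0 by definition
  | succ k ih => rw [Nat.add_succ, ih]  -- 0 + (k+1) = (0+k) + 1, then apply the hypothesis
```

The creative choice to induct on n came from us; Lean’s role here is verification, not invention. That gap between checking and creating is exactly what our two markers above try to probe.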
The article also mentions a chatbot said to score higher than teenagers on high-school math exams. On that we pivot to another source.
Springtime For AI? Or Winter?
Melanie Mitchell writes the Substack blog AI: A Guide for Thinking Humans. She is an expert on AI and worth following more than most, including us.
Earlier this year she posted a part 1 and a part 2 with skeptical takes on the assertion that ChatGPT passed graduate-level exams.
She has this to say in general:
Since its beginning in the 1950s, the field of artificial intelligence has cycled several times between periods of optimistic predictions and massive investment (“AI Spring”) and periods of disappointment, loss of confidence, and reduced funding (“AI Winter”). Even with today’s seemingly fast pace of AI breakthroughs, the development of long-promised technologies such as self-driving cars, housekeeping robots, and conversational companions has turned out to be much harder than many people expected. One reason for these repeating cycles is our limited understanding of the nature and complexity of intelligence itself.
See also this.
This rise and fall suggests to her that the fear of the above-mentioned experts is overstated. What do you think?
Open Problems
The same NYT issue has a second article saying that AI is already poised to invade astrology. Is an invasion by AI of more-dangerous areas really imminent?
We tried to find a call-for-papers for ITCS 2024 to link in our proposed “AI test” above. Is there one?
[some tweaks after getting July 4 date]