Possible Impossibilities and Impossible Possibilities

Gödel’s Lost Letter and P=NP 2023-10-14

A livestreamed talk by Yejin Choi at TTIC on Monday 10/16, 11:30am CT

Source: MacArthur Foundation

Yejin Choi is a professor and a MacArthur Fellow at the Paul G. Allen School of Computer Science & Engineering at the University of Washington. She is also a Distinguished Research Fellow at Oxford’s new Institute for Ethics in AI.

She is about to give a talk in the TTIC 20 Years Distinguished Lecture Series on Monday, October 16th at 11:30 AM CT. The title is the title of this post. Here is her abstract:

Generative AI has led to an unprecedented amount of global attention—both excitements and concerns, in part due to our relatively limited understanding about intelligence—both artificial and natural. In this talk, I will question if there can be possible impossibilities of large language models (i.e., the fundamental limits of transformers, if any) and the impossible possibilities of language models (i.e., seemingly impossible alternative paths beyond scale, if at all). I will then discuss the Generative AI Paradox hypothesis: for AI, at least in its current form, generation capability may often exceed understanding capability, in stark contrast to human intelligence where generation (of e.g., novels, paintings) can be substantially harder than understanding.

The talk will be available online next week via Panopto (livestream). This requires what seems to be a one-click-and-done registration.

Nous

Ken spent a lot of time in Britain, and one of the words he picked up there is nous—pronounced “nouse” and sometimes written “nowse.” He says that Wikipedia does a good job of circling around the meaning he picked up there:

  1. “Nous is … the faculty of the human mind necessary for understanding what is true or real.”

  2. “In colloquial British English, nous also denotes ‘good sense’, which is close to one everyday meaning it had in Ancient Greece.”

  3. “…Described as equivalent to perception … something like ‘awareness’ … comparable to the modern concept of intuition.”

Ken’s more particular meaning is “knowledge of how things hang together,” especially in human spheres where more is at stake than the term “street smarts” generally conveys.

Ken thinks, before having heard her talk or read any of her works (no time), that this is the kind of “understanding” that Choi is talking about. A simple understanding of this kind is that if Joe Bloggs and Mary Bloggs are a well-known married couple, then they cannot also be siblings. They cannot both be children of Gil Bloggs. We demonstrated GPT-3.5’s and GPT-4’s current ignorance of this point.
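To make the rule concrete, here is a minimal Python sketch, purely illustrative on our part, that encodes it as an explicit check over a tiny fact base. The Bloggs names come from our example; nothing here reflects how GPT actually represents knowledge.

```python
# Minimal sketch: a common-sense rule made explicit over a tiny fact base.
# Purely illustrative; this says nothing about how GPT works internally.

facts = {
    ("married", "Joe Bloggs", "Mary Bloggs"),
    ("child_of", "Joe Bloggs", "Gil Bloggs"),
    ("child_of", "Mary Bloggs", "Gil Bloggs"),  # the claim GPT accepted
}

def siblings(kb, x, y):
    """x and y are siblings if the fact base gives them a common parent."""
    parents = {p for (rel, _, p) in kb if rel == "child_of"}
    return any(("child_of", x, p) in kb and ("child_of", y, p) in kb
               for p in parents)

def consistent(kb):
    """Rule: a married couple cannot also be siblings."""
    return not any(rel == "married" and siblings(kb, x, y)
                   for (rel, x, y) in kb)

print(consistent(facts))  # False: the fact base violates the rule
```

The rule is trivial to state once made explicit; the question is whether a generative model can absorb such constraints from text alone.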

Now we’re not saying that GPT versions will never understand this common-sense point. It may be patched in by the next update. Our question is whether GPT and similar generative models will be able to acquire this kind of knowledge naturally—and at pace.

This seems to be what Choi is doubting when she writes, “generation capability may often exceed understanding capability, in stark contrast to human intelligence.” This is her “possible impossibility.” Thus her “Generative AI Paradox” may boil down to a complexity question, after all. At least in complexity we have a plethora of commonly believed impossibilities that we have not managed to prove, and that even become possible relative to some oracle.
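Recall the classic theorem of Baker, Gill, and Solovay: there are oracles {A} and {B} such that {P^A = NP^A} while {P^B \neq NP^B}. So {P = NP}, which most believe impossible, is possible in the relativized sense, and relativizing techniques alone cannot settle the question either way.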

I want to move on to the first of Wikipedia’s three points about nous: understanding what is true or real. A lack of this kind of understanding is behind AI “hallucinations” and a whole lot more.

Announcement

Wait—I just found this article in the New York Times Science section by editor Sarah Jeong:

SCIENCE: Has {P\neq NP} finally been solved? The claim has now been checked by several top math experts and there is some hope that it is correct. The proof is by a clever insight…

Just kidding. This is fake. Online fake news—news designed to intentionally deceive—has recently emerged as a major societal problem. Defending Against Neural Fake News is a paper by Choi and her colleagues: Rowan Zellers, Ari Holtzman, Hannah Rashkin, Yonatan Bisk, Ali Farhadi, Franziska Roesner.

The issue is: how can we tell whether a news item is real or fake? Is seeing it in the New York Times any different from seeing {P\neq NP} finally solved on ResearchGate? It sounds hard to believe, but could it really be true? What if the article had said that {P=NP} is proved? Would that be less likely to be true?
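How might a detector work? One simple baseline idea, which we should stress is not the trained discriminator of the paper above, is that machine-generated text tends to look unusually predictable, i.e. low-perplexity, to a language model. Here is a minimal Python sketch using the public GPT-2 model from Hugging Face; the cutoff value is hypothetical.

```python
# Minimal sketch of a perplexity-based detector for machine-generated text.
# Not the Grover method from the paper; just an illustrative baseline.

import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

def perplexity(text: str) -> float:
    """Perplexity of `text` under GPT-2; lower means more 'model-like'."""
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        loss = model(ids, labels=ids).loss  # mean cross-entropy per token
    return torch.exp(loss).item()

THRESHOLD = 20.0  # hypothetical cutoff; real detectors are trained, not hand-set

def looks_generated(text: str) -> bool:
    return perplexity(text) < THRESHOLD

blurb = ("Has P != NP finally been solved? The claim has now been checked "
         "by several top math experts and there is some hope it is correct.")
print(perplexity(blurb), looks_generated(blurb))
```

A fixed cutoff like this is easy to fool, which is why the paper trains a discriminator instead; indeed its striking finding is that the best defense against the Grover generator is Grover itself.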

Open Problems

Are we going to see more fake news in the future? Or will tools made to detect such articles come to save us? Will AI give us nous or fake nous? We will see.