A Failure to Communicate

Computational Complexity 2025-03-19

With care you can explain major ideas and results in computational complexity to the general public, like the P v NP problem, zero-knowledge proofs, the PCP theorem and Shor's factoring algorithm, in a way that a curious non-scientist can find interesting. Quanta Magazine keeps coming back to complexity because we have an inherently interesting field.

So why am I having such a difficult time with the new Ryan Williams result, that time can be simulated in nearly quadratically less memory, or more precisely \(\mathrm{DTIME}(t(n)) \subseteq \mathrm{DSPACE}(\sqrt{t(n)\log t(n)})\), based on the Cook-Mertz space-efficient tree evaluation algorithm?
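
For concreteness, here's one instantiation of the bound (my own illustration, not an example from the paper): a quadratic-time algorithm, \(t(n) = n^2\), gets simulated in space
\[
\sqrt{n^2 \log n^2} \;=\; n\sqrt{2\log n},
\]
barely more than the square root of its running time.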

Many results in complexity are quite specialized and technical, but this shouldn't be one of them. Ryan's result gives a major new relationship between time and memory, the two most basic resource measures, whose study goes back to the 1960s. Everybody understands time: waiting while their computer or phone is spinning. They know about memory, at least that you have to pay more for an iPhone with more of it. Yet I still find myself challenged to explain this result.

Maybe it's because for historical reasons we use SPACE instead of MEMORY, or because of the pesky \(\log t(n)\) factor. Someone said they got lost in the logarithms. But we can talk about the gist of the result without mentioning space or logarithms.

It might be the \(t(n)\). We take for granted that we measure resources as a function of the problem size, but outsiders can find that a challenging concept. For P v NP I can just talk about efficient computation; here you can't avoid more specific running times.
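
A minimal sketch of what \(t(n)\) means, using a toy of my own choosing (selection sort's comparison count, not anything from Ryan's paper):

```python
def steps(n):
    """Comparisons selection sort makes on n items: t(n) = n(n-1)/2."""
    return n * (n - 1) // 2

# The resource is measured as a function of the input size n:
for n in [10, 100, 1000]:
    print(n, steps(n))  # 45, 4950, 499500 -- roughly quadratic growth
```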

Or maybe people just don't think about time and memory. Most operations on a computer happen nearly instantaneously. Nobody thinks about memory unless they run out of it.

Or because we don't have good real-world examples. When would you prefer a space \(\sqrt{t(n)}\), time \(2^{\sqrt{t(n)}}\) procedure to a space \(t(n)\), time \(t(n)\) algorithm?
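
To see why that trade rarely pays off in practice, here's a back-of-the-envelope sketch (my numbers, assuming the simulation's running time really does blow up to \(2^{\sqrt{t}}\)):

```python
import math

t = 10**6                         # an algorithm taking a million steps
direct_time, direct_space = t, t  # run it as-is: 10^6 time, 10^6 memory

frugal_space = math.isqrt(t)      # space-frugal simulation: ~sqrt(t) = 1000 cells
frugal_time_digits = frugal_space * math.log10(2)

print(direct_time, direct_space)                            # 1000000 1000000
print(frugal_space, f"time ~ 10^{frugal_time_digits:.0f}")  # 1000 time ~ 10^301
```

A thousandfold memory saving, at the price of a running time the universe couldn't accommodate.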

AI is supposed to be good at this. OK, Claude, take it away.

Ryan Williams' breakthrough paper shows that computers can solve complex problems using far less memory than previously thought possible. For 50 years, computer scientists believed that problems requiring a certain amount of time (t) needed at least t divided by a small factor of memory. Williams proved that these same problems can actually be solved using only about the square root of t memory. He achieved this by connecting these problems to the Tree Evaluation problem and leveraging a recent algorithm by Cook and Mertz. This result makes significant progress toward resolving the longstanding P versus PSPACE problem, suggesting that many algorithms could potentially run efficiently on devices with much less memory than we currently use.

Imagine a navigation app trying to find the optimal route through a complex city network. Before this discovery, engineers believed that calculating detailed routes required either substantial memory or accepting slower performance on memory-limited devices. Williams' theorem suggests these calculations could run using dramatically less memory—potentially reducing requirements from 100 MB to just 10 KB (roughly the square root). This breakthrough could enable sophisticated navigation features on devices with severe memory constraints, such as smart watches, older phones, or embedded car systems, allowing them to handle complex routing problems with multiple stops or constraints without performance degradation.