A New AKS
Gödel’s Lost Letter and P=NP 2023-07-01
An award for attacking an NP-hard problem
Miklós Ajtai, Ravi Kumar, and D. Sivakumar were among winners of the ACM STOC 2023 “Test of Time” Awards. The award recognized their joint paper from STOC 2001 titled, “A sieve algorithm for the shortest lattice vector problem.”
Today we congratulate them and the other winners.
The awards recognize papers with the most enduring impact from STOC conferences 10, 20, and 30 years prior. The prior period can extend back up to 4 years, though the years 1993, 2003, and 2013 received the freshest consideration. The other winners are:
- Ajit Agrawal, Philip Klein, and R. Ravi for the STOC 1989 paper: When trees collide: An approximation algorithm for the generalized Steiner problem on networks.
- Ethan Bernstein and Umesh Vazirani for the STOC 1993 paper titled simply: Quantum complexity theory.
- Oded Regev for his STOC 2003 paper (a nice pairing with AKS): New lattice-based cryptographic constructions.
- Kenneth Clarkson and David Woodruff for their STOC 2013 paper: Low rank approximation and regression in input sparsity time.
The AKS Time Test
Where did we get the idea to call their paper “AKS”? Well, the award citation says:
The Ajtai-Kumar-Sivakumar (AKS) paper ushered in a new chapter in the area of lattice algorithms, adding the powerful new technique of sieving to the already formidable toolkit in the field. The AKS algorithm solved the exact shortest vector problem (SVP) on n-dimensional lattices, and presented a dramatic improvement over the celebrated Lenstra-Lenstra-Lovász (LLL) algorithm, which solved SVP approximately. It has continued to have impact, appears to be the best possible theoretically, and is perhaps surprisingly having some practical impact.
Hold on a second here. For the last millennium and a couple years after, “AKS” meant only one easily recognized item: the STOC 1983 paper by Ajtai, János Komlós, and Endre Szemerédi proving it possible to construct sorting networks of size $O(n \log n)$. Thus: “AKS sorting network” or just “AKS” sufficed. Note it is the same Ajtai. (Our friend Bill Gasarch might chime in that in math this trio is more famous for proving the ultimately-tight upper bound of $O(k^2/\log k)$ on the Ramsey numbers $R(3,k)$, but that paper wasn’t at STOC.)
But in this millennium, “AKS” took over to mean the beautiful 2002 theorem of Manindra Agrawal, Neeraj Kayal, and Nitin Saxena showing that the set of prime numbers belongs to deterministic polynomial time. Thus at Wikipedia: AKS primality test. And Scott Aaronson: “The Prime Facts: From Euclid to AKS.” This paper wasn’t at STOC but promptly won the 2006 Gödel Prize, not to mention the 2006 Fulkerson Prize in mathematics.
There isn’t a 40-year Test of Time category, else we could have faced off the first and third AKS head-to-head. But we can still ponder: in 10 more years, which “AKS” will own the acronym?
A Claimer
I, Ken, should make a “disclaimer”—Sivakumar was my second PhD graduate at Buffalo, and he is on our department’s advisory board. But I can turn this around into a “claimer” for some material Buffalo connection. He came to visit several times right after his 1997 graduation, when Jin-Yi Cai was still here. Jin-Yi and his student Ajay Nerurkar had a FOCS 1997 paper improving the bounds of Ajtai’s worst-case-to-average-case reduction for lattice problems from his STOC 1996 paper. This spurred Siva’s interest in lattice problems, which grew further when Siva joined IBM Almaden and gained Ajtai and Ravi as colleagues. Their discussions included an idea from computational learning theory that could be applied in this context.
There is also a Euclid connection here. As Siva told it to me and Buffalo colleagues privately:
One can think of this work as solving a “generalized GCD” problem. In the standard GCD problem, you’re given two numbers and you want to find the greatest common divisor. In the shortest lattice vector problem, instead of numbers, you’re given vectors in $n$ dimensions; and instead of two, you’re given $n$ of them. Euclid (in 300 BC) gave an algorithm for computing the GCD. The generalized vector version was formulated by German mathematician Hermann Minkowski in 1899. The problem has a rich history in CS…
Indeed, whereas Euclid’s Algorithm runs in quadratic time, the shortest vector problem is NP-hard. The key connection to lattices is that $\gcd(a,b)$ is the smallest positive number one can obtain as a linear combination $xa + yb$ where $x$ and $y$ are integers. For instance, $\gcd(12, 33) = 3$, from which it follows that every multiple of $3$ arises as an integer linear combination $12x + 33y$.
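To make the “smallest positive combination” characterization concrete, here is a small Python sketch (the pair 12 and 33 is just a sample) that runs Euclid’s algorithm and then verifies the characterization by brute force over a bounded range of coefficients:

```python
def euclid_gcd(a, b):
    """Euclid's algorithm: repeatedly replace (a, b) with (b, a mod b)."""
    while b:
        a, b = b, a % b
    return a

# gcd(12, 33) = 3: the smallest positive integer of the form 12*x + 33*y.
g = euclid_gcd(12, 33)
combos = {12 * x + 33 * y for x in range(-10, 11) for y in range(-10, 11)}
smallest_positive = min(c for c in combos if c > 0)
assert g == smallest_positive == 3
```

The bounded search over coefficients is enough here because the smallest positive value is already attained with tiny $x$ and $y$ (e.g., $3 = 3 \cdot 12 - 1 \cdot 33$).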
But now suppose we have two dimensions and the vectors $u = (2,1)$ and $v = (1,2)$. Then we get $u - v = (1,-1)$. It is not possible to get $(1,0)$ or $(0,1)$, however, nor $(1,1)$. Make $v = (1,-2)$ instead, however, and none of these points is possible: every nonzero lattice vector must have one entry of absolute value at least $2$. This shows some of the subtlety.
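Claims like these can be checked mechanically. The following Python sketch uses $u = (2,1)$ with $v = (1,2)$, and then with $v = (1,-2)$, as sample two-dimensional bases, and enumerates integer combinations with small coefficients (a small bound suffices for small target points, since far-away coefficients can only produce long vectors):

```python
def lattice_points(u, v, bound=10):
    """All integer combinations x*u + y*v with |x|, |y| <= bound."""
    return {(x * u[0] + y * v[0], x * u[1] + y * v[1])
            for x in range(-bound, bound + 1)
            for y in range(-bound, bound + 1)}

L1 = lattice_points((2, 1), (1, 2))
assert (1, -1) in L1                                  # u - v
assert (1, 0) not in L1 and (0, 1) not in L1 and (1, 1) not in L1

L2 = lattice_points((2, 1), (1, -2))
# Every nonzero vector of this lattice has an entry of absolute value >= 2.
assert all(max(abs(a), abs(b)) >= 2 for a, b in L2 if (a, b) != (0, 0))
```

The second basis happens to be orthogonal with both vectors of length $\sqrt{5}$, which is why no shorter nonzero lattice vector exists.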
How It Works
Siva continues:
The previously best known algorithm for this “generalized GCD problem” ran in time exponential in $n \log n$; with Ravi Kumar and Miki Ajtai, we gave an algorithm that runs in time exponential in $n$. Still exponential time, but only in $n$, not $n \log n$—this turns out to be a big deal. Our algorithm introduced a new technique that we dubbed “sieving,” since we repeatedly filter elements from a set. This technique has blossomed not only in this area but also in cryptanalysis: attempting to break cryptosystems to test their security.
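For contrast with the running times Siva mentions, here is a naive exhaustive search for the shortest vector. This is emphatically not the AKS algorithm, just a hypothetical baseline; the coefficient bound is an assumption that happens to suffice for the tiny basis below:

```python
import itertools
import math

def shortest_vector_bruteforce(basis, coeff_bound=3):
    """Naive exact SVP: try every integer coefficient vector with entries
    in [-coeff_bound, coeff_bound].  That is (2*coeff_bound + 1)**n
    candidates, so the running time is exponential in the dimension n.
    (Choosing a bound that provably suffices is itself nontrivial; 3 is
    plenty for the small example below.)"""
    n = len(basis)
    dim = len(basis[0])
    best, best_norm = None, math.inf
    for coeffs in itertools.product(range(-coeff_bound, coeff_bound + 1),
                                    repeat=n):
        if not any(coeffs):
            continue  # skip the zero vector
        v = tuple(sum(c * b[i] for c, b in zip(coeffs, basis))
                  for i in range(dim))
        nrm = math.sqrt(sum(x * x for x in v))
        if nrm < best_norm:
            best, best_norm = v, nrm
    return best, best_norm

v, length = shortest_vector_bruteforce([(2, 1), (1, -2)])
# For this orthogonal basis the shortest nonzero vectors have length sqrt(5).
assert abs(length - math.sqrt(5)) < 1e-9
```

Even shaving the exponent from $n \log n$ down to $n$ leaves this kind of search hopeless at cryptographic dimensions, which is exactly why smarter techniques like sieving matter.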
The AKS paper itself states the genesis in “a learning algorithm of Blum, Kalai, and Wasserman [from STOC 2000] and the worst-case/average-case reduction of lattice problems due to Ajtai.” It continues:
Our algorithm works in the following way. Given a basis of the lattice, we uniformly choose random lattice points, all inside a sufficiently large parallelepiped, and perturb the lattice points by small quantities. Then, we iteratively apply a “sieving” procedure to these points. The basic property of this step is that given a set of perturbed lattice vectors with a bound on the longest vector in the set, we produce a new set of perturbed lattice vectors such that the longest vector is now about half of the original length while the set itself does not shrink by more than half. The iteration is applied until we obtain a set of vectors in a ball of constant radius. Finally, we argue that (due to the nature of the perturbations) there is a reasonable chance that a short lattice vector and its “nearest neighbor” in the lattice are both discovered by this process. Taking pairwise differences then gives the shortest vector.
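The halving step in the quote can be caricatured in a few lines of Python. This is only a toy sketch of one sieving pass, under simplifying assumptions (no perturbations, and a greedy choice of centers, unlike the actual algorithm): each vector either becomes a “center” or is replaced by its difference with a nearby center, so surviving vectors stay in the lattice while the length bound halves.

```python
import math

def norm(v):
    return math.sqrt(sum(x * x for x in v))

def sub(u, w):
    return tuple(a - b for a, b in zip(u, w))

def sieve_step(vectors, R):
    """One toy sieving pass.  Input: lattice vectors of norm <= R.
    A vector within R/2 of an existing center is replaced by the
    difference (still a lattice vector, of norm <= R/2); otherwise it
    becomes a new center and is dropped from the output.  A packing
    argument bounds how many centers can appear, so the set shrinks
    only modestly while the length bound halves."""
    centers, shorter = [], []
    for v in vectors:
        for c in centers:
            if norm(sub(v, c)) <= R / 2:
                shorter.append(sub(v, c))
                break
        else:
            centers.append(v)
    return shorter
```

Iterating `sieve_step` drives the length bound down geometrically; the real AKS analysis adds the random perturbations so that, at the end, pairwise differences reveal a shortest vector with reasonable probability.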
This is still an exponential time algorithm. But it is practicable enough—also in de-randomized and otherwise improved forms leading to a 2015 paper by Divesh Aggarwal, Daniel Dadush, and Noah Stephens-Davidowitz—to expose flaws in concrete proposed implementations of lattice-based cryptography. The basic crypto idea is simply presented by Niklas Körtge here. The attraction is that the NP-hardness of the simple cracking method and the worst-case/average-case hardness connection promise resistance to quantum algorithms—even if and when they can be scaled up to conquer RSA and other systems based on factoring or forms of discrete log. NP-hardness is often no barrier at practical instance sizes. The AKS algorithm, and others based on related ideas, furnish valuable testing grounds for “post-quantum cryptography” via lattices.
Open Problems
For all the improvements to the AKS algorithm and other lattice sieving methods, they still take exponential space as well as time. Can the space be economized?
Again, we congratulate all the winners.