Ramanujan+100
Peter Cameron's Blog 2020-12-15
I have just spent the last four days, virtually, in Kochi, Kerala, at the International Conference on Number Theory and Discrete Mathematics, commemorating Srinivasa Ramanujan, the great Indian mathematician, on the 100th anniversary of his far-too-early death.
The conference had perhaps more discrete mathematics than number theory. I did not attend every session, which would have involved getting up before 4am every day, but took in over half of the plenary lectures and a couple of others.
Why number theory and discrete mathematics? Ramanujan’s work was in number theory, although some of it (for example the work with Hardy on the asymptotics of the partition function) was certainly on the boundary of discrete mathematics. I think a better answer to the question is as follows.
There is a remarkable connection between the smallest non-trivial Laplacian eigenvalue of a graph and important global properties of the graph, such as its isoperimetric number and the speed of convergence of the random walk on it. The larger this eigenvalue, the better the graph's network properties.
(The Laplacian eigenvalues are the eigenvalues of the symmetric matrix D−A, where A is the usual adjacency matrix and D is the diagonal matrix whose entries are the valencies of the vertices. This matrix is positive semidefinite, and the multiplicity of the “trivial” zero eigenvalue is equal to the number of connected components. From the definition, we see that if the graph is regular, say with valency k, then the Laplacian eigenvalues are obtained by subtracting the adjacency matrix eigenvalues from k. Hence having the first non-zero Laplacian eigenvalue large is equivalent to having a big gap between the largest and second-largest eigenvalue of the adjacency matrix.)
The Alon–Boppana theorem asserts that, for given k and any ε > 0, there are only finitely many connected regular graphs of valency k with second eigenvalue less than 2√(k−1) − ε. Thus in an infinite family of such graphs, almost all members have second eigenvalue above 2√(k−1) − ε; the bound cannot be beaten asymptotically. If every graph in the family has second eigenvalue at most 2√(k−1), it is called a family of Ramanujan graphs. The name was given by Lubotzky, Phillips and Sarnak, who, along with Margulis, were the first people to find such a family; it marks the fact that the proof used a conjecture by Ramanujan on sums of four squares. The result they needed had been proved by Igusa. This is one of my favourite papers, in Combinatorica in 1988 and free to download; I urge you to read it.
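To make this concrete, here is a minimal sketch in Python/numpy (my own illustration, using the Petersen graph as an arbitrary example) which computes both spectra of a regular graph, checks the relation between them described above, and compares the second adjacency eigenvalue with the bound 2√(k−1):

```python
import numpy as np

# Adjacency matrix of the Petersen graph: outer 5-cycle on 0..4,
# inner pentagram on 5..9, spokes i -- i+5.
edges = ([(i, (i + 1) % 5) for i in range(5)]
         + [(5 + i, 5 + (i + 2) % 5) for i in range(5)]
         + [(i, i + 5) for i in range(5)])
A = np.zeros((10, 10))
for u, v in edges:
    A[u, v] = A[v, u] = 1

k = int(A.sum(axis=0)[0])          # valency (3: the graph is regular)
L = np.diag(A.sum(axis=0)) - A     # Laplacian D - A

adj = np.sort(np.linalg.eigvalsh(A))[::-1]   # adj[0] = k for a connected graph
lap = np.sort(np.linalg.eigvalsh(L))

# For a k-regular graph, the Laplacian eigenvalues are k minus the adjacency ones.
assert np.allclose(lap, np.sort(k - adj))

print("second adjacency eigenvalue:", adj[1])              # 1.0 for Petersen
print("Ramanujan bound 2*sqrt(k-1):", 2 * np.sqrt(k - 1))  # about 2.828
```

Since 1 < 2√2, the Petersen graph comfortably satisfies the Ramanujan bound.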
Anyway, this connection opens the way for discrete mathematicians to invoke and honour the name of Ramanujan, as many speakers in the conference did.
Peter Sarnak gave a talk similar to his Hardy lecture earlier this year, which I have discussed here. The Alon–Boppana theorem and the construction of Ramanujan graphs show that the interval from 2√2 to 3 is a "gap" in the spectra of connected trivalent graphs: infinitely many such graphs have no eigenvalue in this interval, and it is maximal in the sense that no longer interval containing it has this property.
There were several talks on Ramanujan. George Andrews presented his conjecture about how Ramanujan hit upon the mock theta functions, which were among Ramanujan's last pieces of work. His talk described an analysis, using things that Ramanujan knew very well, which would lead to these functions. This no doubt would have been "chalk and talk" in earlier times; George wrote it all out in longhand in real time, even the complicated double summations.
One of the most remarkable talks was by Neal Koblitz. G. H. Hardy described number theory as "gentle and clean", and Neal set out to convince us that applied mathematics is not always so. He began by describing two failures of technology (or applied mathematics) which caused very great harm in the USA. First was a prediction based on epidemic modelling from a group in his own university, which suggested that if masks were worn and social distancing observed, the pandemic would be over by June and the death toll would be limited to 60,000. The President and his advisers heard the conclusion but not the condition, failed to enforce masks or social distancing, and the rest is history.

The second was over-reliance on cryptography in the Test-and-Trace system in the US. The modellers had failed to realise that Bluetooth signals travel through walls and floors, so people living in small apartments are more likely to be reported by the system as having been in contact with an infected person; moreover, this group are disproportionately poor and black, and stand to be hit much harder as a result, losing their jobs and possibly even facing eviction. The modellers were unaware of this because the reliance on cryptography meant that these patterns were not visible.
He moved on to talk about quantum computing. He took us through the basics, and described Shor's algorithm for factorization in detail, keeping an eye on how many logical qubits are required. For current sizes of integers used in RSA, this number is somewhere around 4000. Each logical qubit needs thousands of physical qubits to protect it from interference. He thinks it unlikely that such a machine will be built this century. Moreover, cryptographers have had plenty of notice, and have developed quantum-safe cryptographic methods. He ended with some comments about how bad we are at predicting future developments in science and technology. In the golden age of science fiction in the 1950s and 1960s, it was thought that space travel would become commonplace by the end of the 20th century, and that nuclear power would become cheap and portable (even nuclear-powered wristwatches were suggested!). The writers completely failed to predict the miniaturization of computing technology, and the resulting growth of the internet and social media.
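To be clear about what is quantum and what is not: only the order-finding step needs the quantum computer; the reduction from factoring to order-finding is classical. Here is a minimal Python sketch of that reduction (my own illustration, not from the talk, with the order found by slow brute force in place of the quantum Fourier transform):

```python
from math import gcd
import random

def order(a, n):
    # Multiplicative order of a mod n, by brute force. This is the step
    # a quantum computer would do efficiently; here it is slow and classical.
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_classical(n):
    # Classical reduction: factor n (odd, composite, not a prime power)
    # given an oracle for multiplicative orders.
    while True:
        a = random.randrange(2, n)
        g = gcd(a, n)
        if g > 1:
            return g              # lucky: a already shares a factor with n
        r = order(a, n)
        if r % 2:
            continue              # need an even order
        y = pow(a, r // 2, n)
        if y == n - 1:
            continue              # a^(r/2) = -1 (mod n) is useless
        return gcd(y - 1, n)      # y^2 = 1 (mod n), y != 1, -1: proper factor

print(shor_classical(15))  # prints 3 or 5
```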
One talk that caught my attention was by Kannan Soundararajan. Apparently it is known that, if you choose a random character of the symmetric group and evaluate it on a random group element, then the probability that the result is 0 tends to 1 as the degree of the symmetric group grows. He and his coauthor had attacked a different question: what if instead you choose a random entry of the character table (that is, a random character evaluated on the elements of a random conjugacy class)? What changes is that we are choosing a random conjugacy class instead of a random group element. The conjugacy class sizes in the symmetric group vary from tiny (the transpositions) to huge, essentially a fraction 1/n of the group order for the symmetric group of degree n, for elements with a cycle of length close to n. They cannot prove that a random entry is zero with high probability, but can show that, for a fixed prime, the entry is divisible by that prime with high probability. ("Random" here means "from the uniform distribution".)
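One can experiment with this for small degrees. The following Python sketch (mine, not the speakers'; practical only for small n) computes character values by the Murnaghan–Nakayama rule, in its beta-number (abacus) form, and estimates the proportion of zero entries in the character table of the symmetric group of degree n:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def chi(lam, mu):
    """Character indexed by partition lam, on the class of cycle type mu,
    via the Murnaghan-Nakayama rule in beta-number form."""
    if not mu:
        return 1
    k, rest = mu[0], mu[1:]
    L = len(lam)
    beta = [lam[i] + (L - 1 - i) for i in range(L)]  # distinct beta-numbers
    total = 0
    for b in beta:
        if b - k >= 0 and (b - k) not in beta:
            # removing a border strip of size k moves one bead down k places;
            # the sign is (-1)^(number of beads jumped over)
            sign = (-1) ** sum(1 for c in beta if b - k < c < b)
            new = sorted([c if c != b else b - k for c in beta], reverse=True)
            nu = tuple(p for p in (c - (L - 1 - j) for j, c in enumerate(new))
                       if p > 0)                      # back to a partition
            total += sign * chi(nu, rest)
    return total

def partitions(n, largest=None):
    """All partitions of n as weakly decreasing tuples."""
    if n == 0:
        yield ()
        return
    for first in range(min(n, largest or n), 0, -1):
        for rest in partitions(n - first, first):
            yield (first,) + rest

n = 8
parts = list(partitions(n))
table = [chi(lam, mu) for lam in parts for mu in parts]
print(sum(v == 0 for v in table) / len(table))  # proportion of zero entries
```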
I could see no reason for choosing the character at random. It seemed to me more natural to use the Plancherel measure, which makes the probability of a character proportional to the square of its degree (so that, in the symmetric group, the probability of the character indexed by a partition λ is equal to the number of pairs of standard Young tableaux of shape λ divided by n!). This gives two further problems, since we may choose a random element or a random conjugacy class. I don't know whether this variant is easier or harder than using the uniform distribution.
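Sampling from the Plancherel measure is straightforward in practice: by the RSK correspondence, the shape of the tableau obtained by row-inserting a uniformly random permutation is Plancherel-distributed. A minimal Python sketch (again my own illustration):

```python
import random
from bisect import bisect_right

def rsk_shape(perm):
    """Shape of the RSK insertion tableau of a permutation."""
    rows = []
    for x in perm:
        for row in rows:
            i = bisect_right(row, x)   # leftmost entry strictly greater than x
            if i == len(row):
                row.append(x)          # x is largest in the row: place it
                break
            row[i], x = x, row[i]      # bump the displaced entry down a row
        else:
            rows.append([x])           # fell off the bottom: start a new row
    return [len(r) for r in rows]

# A Plancherel-random partition of 100:
perm = list(range(100))
random.shuffle(perm)
print(rsk_shape(perm))
```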
As to the distribution of group elements, there has been a lot of research on choosing a random element of an arbitrary group, for example using the "product replacement algorithm". A random conjugacy class can be chosen by taking a random walk on the commuting graph. In fact, in the symmetric group it is easy to choose a random element: take an integer between 0 and n!−1, write it in "factoradic" form, and decode this into a permutation, as in the sketch below.
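Here is a minimal Python sketch of the factoradic decoding (my own illustration):

```python
from math import factorial
import random

def nth_permutation(m, n):
    """Decode an integer 0 <= m < n! into a permutation of 0..n-1
    via its factoradic digits (the Lehmer code)."""
    digits = []
    for base in range(1, n + 1):   # factoradic digits, least significant first
        digits.append(m % base)
        m //= base
    digits.reverse()               # most significant digit first
    remaining = list(range(n))
    return [remaining.pop(d) for d in digits]

# A uniform random permutation of 0..9:
print(nth_permutation(random.randrange(factorial(10)), 10))
```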
I will finish this account with a brief mention of the public lecture by Ram Murty, entitled "Ramanujan, number theory and the pandemic". Ramanujan had been faced with the problem of finding a power series expansion for the Lambert W function, the inverse of the function mapping z to ze^z. To do this, he re-invented the Lagrange inversion formula. The Lambert W function appears in solving the basic differential equations of the SIR model (susceptible, infected, recovered) of the spread of an epidemic.
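For the record, the series can be recovered by Lagrange inversion (a standard computation, not necessarily the one shown in the lecture): writing we^w = z as w = zφ(w) with φ(w) = e^(−w), the inversion formula gives

\[
[z^n]\,W \;=\; \frac{1}{n}\,[w^{n-1}]\,e^{-nw} \;=\; \frac{1}{n}\cdot\frac{(-n)^{n-1}}{(n-1)!} \;=\; \frac{(-n)^{n-1}}{n!},
\qquad\text{so}\qquad
W(z) \;=\; \sum_{n\ge 1}\frac{(-n)^{n-1}}{n!}\,z^n.
\]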