Louis Nirenberg

What's new 2020-01-27

I just heard the news that Louis Nirenberg died a few days ago, aged 94.  Nirenberg made a vast number of contributions to analysis and PDE (and his work has come up repeatedly on my own blog); I wrote about his beautiful moving planes argument with Gidas and Ni to establish symmetry of ground states in this post on the occasion of him receiving the Chern medal, and on how his extremely useful interpolation inequality with Gagliardo (generalising a previous inequality of Ladyzhenskaya) can be viewed as an amplification of the usual Sobolev inequality in this post.  Another fundamentally useful inequality of Nirenberg is the John-Nirenberg inequality established with Fritz John: if a (locally integrable) function f: {\bf R} \to {\bf R} (which for simplicity of exposition we place in one dimension) obeys the bounded mean oscillation property

\displaystyle \frac{1}{|I|} \int_I |f(x)-f_I|\ dx \leq A \quad (1)

for all intervals I, where f_I := \frac{1}{|I|} \int_I f(x)\ dx is the average value of f on I, then one has exponentially good large deviation estimates

\displaystyle \frac{1}{|I|} |\{ x \in I: |f(x)-f_I| \geq \lambda A \}| \leq \exp( - c \lambda ) \quad (2)

for all \lambda>0 and some absolute constant c.  This can be compared with Markov’s inequality, which only gives the far weaker decay

\displaystyle \frac{1}{|I|} |\{ x \in I: |f(x)-f_I| \geq \lambda A \}| \leq \frac{1}{\lambda}. \quad (3)
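
Indeed, (3) follows immediately from (1) and Markov's inequality applied to the function |f(x)-f_I| on I:

\displaystyle |\{ x \in I: |f(x)-f_I| \geq \lambda A \}| \leq \frac{1}{\lambda A} \int_I |f(x)-f_I|\ dx \leq \frac{|I|}{\lambda}.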

The point is that (1) is assumed to hold not just for a given interval I, but also for all subintervals of I, and this is a much more powerful hypothesis, allowing one for instance to use the standard Calderón-Zygmund technique of stopping time arguments to “amplify” (3) to (2).  Basically, for any given interval I, one can use (1) and repeated halving of the interval I (stopping whenever significant deviation from the mean is encountered) to locate some disjoint exceptional subintervals J on which f_J deviates from f_I by O(A), with the total measure of the J being a small fraction of that of I (thanks to a variant of (3)), and with f staying within O(A) of f_I at almost every point of I outside of these exceptional intervals.  One can then establish (2) by an induction on \lambda, as sketched below.  (There are other proofs of this inequality also; for instance, one can use Bellman functions, as discussed in this old set of notes of mine.)   Informally, the John-Nirenberg inequality asserts that functions of bounded mean oscillation are “almost as good” as bounded functions, in that they almost always stay within a bounded distance from their mean, and in fact the space BMO of functions of bounded mean oscillation ends up being superior to the space L^\infty of bounded measurable functions for many harmonic analysis purposes (among other things, BMO is more stable with respect to singular integral operators).
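
To give a little more detail on the induction on \lambda (this is just one way to organise the argument, with no attempt to optimise constants), one can introduce the quantity

\displaystyle F(\lambda) := \sup_I \frac{1}{|I|} |\{ x \in I: |f(x)-f_I| \geq \lambda A \}|

where the supremum is over all intervals I.  If the absolute constant C_0 is chosen large enough to dominate both of the O(A) bounds from the stopping time construction, then any point of I where |f(x)-f_I| \geq (\lambda+C_0) A must (outside of a set of measure zero) lie in one of the exceptional intervals J and obey |f(x)-f_J| \geq \lambda A there; since the J have total measure at most (say) half of |I|, this gives the recursive bound

\displaystyle F(\lambda + C_0) \leq \frac{1}{2} F(\lambda).

Iterating this bound, starting from the trivial estimate F(\lambda) \leq 1, then yields exponential decay of the shape (2) (after adjusting the constant c).  The classical example of an unbounded function of bounded mean oscillation is \log \frac{1}{|x|}, which shows that the exponential decay in (2) cannot in general be improved.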

I met Louis a few times in my career; even in his later years when he was wheelchair-bound, he would often come to conferences and talks, and ask very insightful questions at the end of the lecture (even when it looked like he was asleep during much of the actual talk!).  I have a vague memory of him asking me some questions in one of the early talks I gave as a postdoc; I unfortunately do not remember exactly what the topic was (some sort of PDE, I think), but I was struck by how kindly the questions were posed, and how patiently he would listen to my excited chattering about my own work.